00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 1007 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3674 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.076 The recommended git tool is: git 00:00:00.077 using credential 00000000-0000-0000-0000-000000000002 00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.122 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.161 Using shallow fetch with depth 1 00:00:00.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.162 > git --version # timeout=10 00:00:00.196 > git --version # 'git version 2.39.2' 00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.220 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.220 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.669 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.682 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.695 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.695 > git config core.sparsecheckout # timeout=10 00:00:07.708 > git read-tree -mu HEAD # timeout=10 00:00:07.724 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.746 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.747 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.832 [Pipeline] Start of Pipeline 00:00:07.846 [Pipeline] library 00:00:07.848 Loading library shm_lib@master 00:00:07.849 Library shm_lib@master is cached. Copying from home. 00:00:07.865 [Pipeline] node 00:00:07.878 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:00:07.879 [Pipeline] { 00:00:07.890 [Pipeline] catchError 00:00:07.892 [Pipeline] { 00:00:07.906 [Pipeline] wrap 00:00:07.915 [Pipeline] { 00:00:07.922 [Pipeline] stage 00:00:07.923 [Pipeline] { (Prologue) 00:00:07.944 [Pipeline] echo 00:00:07.946 Node: VM-host-SM17 00:00:07.955 [Pipeline] cleanWs 00:00:07.965 [WS-CLEANUP] Deleting project workspace... 00:00:07.965 [WS-CLEANUP] Deferred wipeout is used... 
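The prologue above amounts to a shallow, credentialed fetch of one ref from the jenkins_build_pool repo followed by a detached checkout of the fetched revision. A rough local equivalent is sketched below; it assumes anonymous access (the CI injects credentials via GIT_ASKPASS and routes through an HTTP proxy, both omitted here), so treat it as illustrative rather than the exact Jenkins behaviour.

  # Sketch: reproduce the shallow checkout of the jbp repo locally (no credentials/proxy).
  repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git init jbp && cd jbp
  git remote add origin "$repo"
  git fetch --tags --force --depth=1 origin refs/heads/master   # shallow fetch of master only
  git checkout -f FETCH_HEAD                                    # detached HEAD at the fetched commit
  git log --oneline -n 1                                        # confirm which revision was checked out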
00:00:07.970 [WS-CLEANUP] done 00:00:08.230 [Pipeline] setCustomBuildProperty 00:00:08.297 [Pipeline] httpRequest 00:00:11.319 [Pipeline] echo 00:00:11.320 Sorcerer 10.211.164.101 is dead 00:00:11.329 [Pipeline] httpRequest 00:00:13.274 [Pipeline] echo 00:00:13.275 Sorcerer 10.211.164.101 is alive 00:00:13.286 [Pipeline] retry 00:00:13.288 [Pipeline] { 00:00:13.301 [Pipeline] httpRequest 00:00:13.306 HttpMethod: GET 00:00:13.306 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.307 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.329 Response Code: HTTP/1.1 200 OK 00:00:13.329 Success: Status code 200 is in the accepted range: 200,404 00:00:13.330 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:28.973 [Pipeline] } 00:00:28.995 [Pipeline] // retry 00:00:29.005 [Pipeline] sh 00:00:29.356 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:29.388 [Pipeline] httpRequest 00:00:30.608 [Pipeline] echo 00:00:30.610 Sorcerer 10.211.164.101 is alive 00:00:30.621 [Pipeline] retry 00:00:30.623 [Pipeline] { 00:00:30.638 [Pipeline] httpRequest 00:00:30.643 HttpMethod: GET 00:00:30.643 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:30.644 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:30.654 Response Code: HTTP/1.1 200 OK 00:00:30.655 Success: Status code 200 is in the accepted range: 200,404 00:00:30.656 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:20.914 [Pipeline] } 00:01:20.932 [Pipeline] // retry 00:01:20.940 [Pipeline] sh 00:01:21.219 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:24.525 [Pipeline] sh 00:01:24.804 + git -C spdk log --oneline -n5 00:01:24.804 c13c99a5e test: Various fixes for Fedora40 00:01:24.804 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:24.804 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:24.804 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:24.804 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:24.827 [Pipeline] withCredentials 00:01:24.838 > git --version # timeout=10 00:01:24.858 > git --version # 'git version 2.39.2' 00:01:24.875 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:24.877 [Pipeline] { 00:01:24.886 [Pipeline] retry 00:01:24.888 [Pipeline] { 00:01:24.904 [Pipeline] sh 00:01:25.184 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:25.196 [Pipeline] } 00:01:25.209 [Pipeline] // retry 00:01:25.215 [Pipeline] } 00:01:25.232 [Pipeline] // withCredentials 00:01:25.242 [Pipeline] httpRequest 00:01:26.681 [Pipeline] echo 00:01:26.683 Sorcerer 10.211.164.101 is alive 00:01:26.693 [Pipeline] retry 00:01:26.696 [Pipeline] { 00:01:26.710 [Pipeline] httpRequest 00:01:26.715 HttpMethod: GET 00:01:26.716 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:26.717 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:26.718 Response Code: HTTP/1.1 200 OK 00:01:26.719 Success: Status code 200 is in the accepted range: 200,404 00:01:26.719 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:32.781 [Pipeline] } 00:01:32.799 [Pipeline] // retry 00:01:32.808 [Pipeline] sh 00:01:33.087 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:35.006 [Pipeline] sh 00:01:35.286 + git -C dpdk log --oneline -n5 00:01:35.286 eeb0605f11 version: 23.11.0 00:01:35.286 238778122a doc: update release notes for 23.11 00:01:35.286 46aa6b3cfc doc: fix description of RSS features 00:01:35.286 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:35.286 7e421ae345 devtools: support skipping forbid rule check 00:01:35.304 [Pipeline] writeFile 00:01:35.320 [Pipeline] sh 00:01:35.602 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:35.614 [Pipeline] sh 00:01:35.893 + cat autorun-spdk.conf 00:01:35.893 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.893 SPDK_TEST_NVMF=1 00:01:35.893 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.893 SPDK_TEST_URING=1 00:01:35.893 SPDK_TEST_USDT=1 00:01:35.893 SPDK_RUN_UBSAN=1 00:01:35.893 NET_TYPE=virt 00:01:35.893 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:35.893 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:35.893 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.899 RUN_NIGHTLY=1 00:01:35.901 [Pipeline] } 00:01:35.914 [Pipeline] // stage 00:01:35.929 [Pipeline] stage 00:01:35.930 [Pipeline] { (Run VM) 00:01:35.941 [Pipeline] sh 00:01:36.220 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:36.220 + echo 'Start stage prepare_nvme.sh' 00:01:36.220 Start stage prepare_nvme.sh 00:01:36.220 + [[ -n 3 ]] 00:01:36.220 + disk_prefix=ex3 00:01:36.220 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:01:36.220 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:01:36.220 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:01:36.220 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.220 ++ SPDK_TEST_NVMF=1 00:01:36.220 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.220 ++ SPDK_TEST_URING=1 00:01:36.220 ++ SPDK_TEST_USDT=1 00:01:36.220 ++ SPDK_RUN_UBSAN=1 00:01:36.220 ++ NET_TYPE=virt 00:01:36.220 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:36.220 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:36.220 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:36.220 ++ RUN_NIGHTLY=1 00:01:36.220 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:36.220 + nvme_files=() 00:01:36.220 + declare -A nvme_files 00:01:36.220 + backend_dir=/var/lib/libvirt/images/backends 00:01:36.220 + nvme_files['nvme.img']=5G 00:01:36.220 + nvme_files['nvme-cmb.img']=5G 00:01:36.220 + nvme_files['nvme-multi0.img']=4G 00:01:36.220 + nvme_files['nvme-multi1.img']=4G 00:01:36.220 + nvme_files['nvme-multi2.img']=4G 00:01:36.220 + nvme_files['nvme-openstack.img']=8G 00:01:36.220 + nvme_files['nvme-zns.img']=5G 00:01:36.220 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:36.220 + (( SPDK_TEST_FTL == 1 )) 00:01:36.220 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:36.220 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:36.220 + for nvme in "${!nvme_files[@]}" 00:01:36.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:36.220 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.220 + for nvme in "${!nvme_files[@]}" 00:01:36.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:36.220 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.220 + for nvme in "${!nvme_files[@]}" 00:01:36.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:36.220 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:36.220 + for nvme in "${!nvme_files[@]}" 00:01:36.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:36.220 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.220 + for nvme in "${!nvme_files[@]}" 00:01:36.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:36.220 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.220 + for nvme in "${!nvme_files[@]}" 00:01:36.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:36.220 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:36.220 + for nvme in "${!nvme_files[@]}" 00:01:36.220 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:36.479 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:36.479 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:36.479 + echo 'End stage prepare_nvme.sh' 00:01:36.479 End stage prepare_nvme.sh 00:01:36.491 [Pipeline] sh 00:01:36.812 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:36.813 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:01:36.813 00:01:36.813 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:01:36.813 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:01:36.813 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:36.813 HELP=0 00:01:36.813 DRY_RUN=0 00:01:36.813 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:01:36.813 NVME_DISKS_TYPE=nvme,nvme, 00:01:36.813 NVME_AUTO_CREATE=0 00:01:36.813 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:01:36.813 NVME_CMB=,, 00:01:36.813 NVME_PMR=,, 00:01:36.813 NVME_ZNS=,, 00:01:36.813 NVME_MS=,, 00:01:36.813 NVME_FDP=,, 
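The prepare_nvme.sh stage above declares an associative array of backing-image names and sizes and loops over it with spdk/scripts/vagrant/create_nvme_img.sh. The "Formatting ... fmt=raw ... preallocation=falloc" lines suggest a qemu-img create underneath; the sketch below reuses the same names, sizes, and falloc preallocation from the log but calls qemu-img directly, so it is an assumption-laden stand-in for the CI helper, not its implementation.

  # Sketch: recreate the raw NVMe backing images the loop above produces, via qemu-img.
  declare -A nvme_files=(
    [nvme.img]=5G  [nvme-cmb.img]=5G  [nvme-zns.img]=5G
    [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
    [nvme-openstack.img]=8G
  )
  backend_dir=/var/lib/libvirt/images/backends
  for img in "${!nvme_files[@]}"; do
    sudo qemu-img create -f raw -o preallocation=falloc \
      "$backend_dir/ex3-${img}" "${nvme_files[$img]}"
  done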
00:01:36.813 SPDK_VAGRANT_DISTRO=fedora39 00:01:36.813 SPDK_VAGRANT_VMCPU=10 00:01:36.813 SPDK_VAGRANT_VMRAM=12288 00:01:36.813 SPDK_VAGRANT_PROVIDER=libvirt 00:01:36.813 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:36.813 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:36.813 SPDK_OPENSTACK_NETWORK=0 00:01:36.813 VAGRANT_PACKAGE_BOX=0 00:01:36.813 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:36.813 FORCE_DISTRO=true 00:01:36.813 VAGRANT_BOX_VERSION= 00:01:36.813 EXTRA_VAGRANTFILES= 00:01:36.813 NIC_MODEL=e1000 00:01:36.813 00:01:36.813 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:01:36.813 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:40.102 Bringing machine 'default' up with 'libvirt' provider... 00:01:40.361 ==> default: Creating image (snapshot of base box volume). 00:01:40.929 ==> default: Creating domain with the following settings... 00:01:40.929 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732777862_200746f54a7e0678be6b 00:01:40.929 ==> default: -- Domain type: kvm 00:01:40.929 ==> default: -- Cpus: 10 00:01:40.929 ==> default: -- Feature: acpi 00:01:40.929 ==> default: -- Feature: apic 00:01:40.929 ==> default: -- Feature: pae 00:01:40.929 ==> default: -- Memory: 12288M 00:01:40.929 ==> default: -- Memory Backing: hugepages: 00:01:40.929 ==> default: -- Management MAC: 00:01:40.929 ==> default: -- Loader: 00:01:40.929 ==> default: -- Nvram: 00:01:40.929 ==> default: -- Base box: spdk/fedora39 00:01:40.929 ==> default: -- Storage pool: default 00:01:40.929 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732777862_200746f54a7e0678be6b.img (20G) 00:01:40.929 ==> default: -- Volume Cache: default 00:01:40.929 ==> default: -- Kernel: 00:01:40.929 ==> default: -- Initrd: 00:01:40.929 ==> default: -- Graphics Type: vnc 00:01:40.929 ==> default: -- Graphics Port: -1 00:01:40.929 ==> default: -- Graphics IP: 127.0.0.1 00:01:40.929 ==> default: -- Graphics Password: Not defined 00:01:40.929 ==> default: -- Video Type: cirrus 00:01:40.929 ==> default: -- Video VRAM: 9216 00:01:40.929 ==> default: -- Sound Type: 00:01:40.929 ==> default: -- Keymap: en-us 00:01:40.929 ==> default: -- TPM Path: 00:01:40.929 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:40.929 ==> default: -- Command line args: 00:01:40.929 ==> default: -> value=-device, 00:01:40.929 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:40.929 ==> default: -> value=-drive, 00:01:40.929 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:01:40.930 ==> default: -> value=-device, 00:01:40.930 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.930 ==> default: -> value=-device, 00:01:40.930 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:40.930 ==> default: -> value=-drive, 00:01:40.930 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:40.930 ==> default: -> value=-device, 00:01:40.930 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.930 ==> default: -> value=-drive, 00:01:40.930 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:40.930 ==> default: -> value=-device, 00:01:40.930 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.930 ==> default: -> value=-drive, 00:01:40.930 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:40.930 ==> default: -> value=-device, 00:01:40.930 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:40.930 ==> default: Creating shared folders metadata... 00:01:40.930 ==> default: Starting domain. 00:01:42.309 ==> default: Waiting for domain to get an IP address... 00:02:00.393 ==> default: Waiting for SSH to become available... 00:02:00.393 ==> default: Configuring and enabling network interfaces... 00:02:02.928 default: SSH address: 192.168.121.97:22 00:02:02.928 default: SSH username: vagrant 00:02:02.928 default: SSH auth method: private key 00:02:04.831 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:12.953 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:18.261 ==> default: Mounting SSHFS shared folder... 00:02:19.198 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:19.456 ==> default: Checking Mount.. 00:02:20.833 ==> default: Folder Successfully Mounted! 00:02:20.833 ==> default: Running provisioner: file... 00:02:21.402 default: ~/.gitconfig => .gitconfig 00:02:21.970 00:02:21.970 SUCCESS! 00:02:21.970 00:02:21.970 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:02:21.970 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:21.970 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:02:21.970 00:02:21.979 [Pipeline] } 00:02:21.994 [Pipeline] // stage 00:02:22.004 [Pipeline] dir 00:02:22.005 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:02:22.007 [Pipeline] { 00:02:22.019 [Pipeline] catchError 00:02:22.021 [Pipeline] { 00:02:22.034 [Pipeline] sh 00:02:22.314 + vagrant ssh-config --host vagrant 00:02:22.314 + sed -ne /^Host/,$p 00:02:22.314 + tee ssh_conf 00:02:26.515 Host vagrant 00:02:26.515 HostName 192.168.121.97 00:02:26.515 User vagrant 00:02:26.515 Port 22 00:02:26.515 UserKnownHostsFile /dev/null 00:02:26.515 StrictHostKeyChecking no 00:02:26.515 PasswordAuthentication no 00:02:26.515 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:26.515 IdentitiesOnly yes 00:02:26.515 LogLevel FATAL 00:02:26.515 ForwardAgent yes 00:02:26.515 ForwardX11 yes 00:02:26.515 00:02:26.530 [Pipeline] withEnv 00:02:26.532 [Pipeline] { 00:02:26.547 [Pipeline] sh 00:02:26.827 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:26.827 source /etc/os-release 00:02:26.827 [[ -e /image.version ]] && img=$(< /image.version) 00:02:26.827 # Minimal, systemd-like check. 
00:02:26.827 if [[ -e /.dockerenv ]]; then 00:02:26.827 # Clear garbage from the node's name: 00:02:26.827 # agt-er_autotest_547-896 -> autotest_547-896 00:02:26.827 # $HOSTNAME is the actual container id 00:02:26.827 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:26.827 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:26.827 # We can assume this is a mount from a host where container is running, 00:02:26.827 # so fetch its hostname to easily identify the target swarm worker. 00:02:26.827 container="$(< /etc/hostname) ($agent)" 00:02:26.827 else 00:02:26.827 # Fallback 00:02:26.827 container=$agent 00:02:26.827 fi 00:02:26.827 fi 00:02:26.827 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:26.827 00:02:27.098 [Pipeline] } 00:02:27.115 [Pipeline] // withEnv 00:02:27.123 [Pipeline] setCustomBuildProperty 00:02:27.139 [Pipeline] stage 00:02:27.141 [Pipeline] { (Tests) 00:02:27.160 [Pipeline] sh 00:02:27.441 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:27.716 [Pipeline] sh 00:02:27.995 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:28.270 [Pipeline] timeout 00:02:28.270 Timeout set to expire in 1 hr 0 min 00:02:28.273 [Pipeline] { 00:02:28.287 [Pipeline] sh 00:02:28.584 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:29.156 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:02:29.169 [Pipeline] sh 00:02:29.449 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:29.720 [Pipeline] sh 00:02:29.999 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:30.273 [Pipeline] sh 00:02:30.553 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:30.811 ++ readlink -f spdk_repo 00:02:30.811 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:30.811 + [[ -n /home/vagrant/spdk_repo ]] 00:02:30.811 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:30.811 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:30.811 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:30.811 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:30.811 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:30.811 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:30.811 + cd /home/vagrant/spdk_repo 00:02:30.811 + source /etc/os-release 00:02:30.811 ++ NAME='Fedora Linux' 00:02:30.811 ++ VERSION='39 (Cloud Edition)' 00:02:30.811 ++ ID=fedora 00:02:30.811 ++ VERSION_ID=39 00:02:30.811 ++ VERSION_CODENAME= 00:02:30.811 ++ PLATFORM_ID=platform:f39 00:02:30.811 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:30.811 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:30.811 ++ LOGO=fedora-logo-icon 00:02:30.811 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:30.811 ++ HOME_URL=https://fedoraproject.org/ 00:02:30.811 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:30.811 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:30.811 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:30.811 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:30.811 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:30.811 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:30.811 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:30.811 ++ SUPPORT_END=2024-11-12 00:02:30.811 ++ VARIANT='Cloud Edition' 00:02:30.811 ++ VARIANT_ID=cloud 00:02:30.811 + uname -a 00:02:30.811 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:30.811 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:30.811 Hugepages 00:02:30.811 node hugesize free / total 00:02:30.811 node0 1048576kB 0 / 0 00:02:30.811 node0 2048kB 0 / 0 00:02:30.811 00:02:30.811 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:30.811 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:30.811 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:30.811 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:31.070 + rm -f /tmp/spdk-ld-path 00:02:31.070 + source autorun-spdk.conf 00:02:31.070 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.070 ++ SPDK_TEST_NVMF=1 00:02:31.070 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.070 ++ SPDK_TEST_URING=1 00:02:31.070 ++ SPDK_TEST_USDT=1 00:02:31.070 ++ SPDK_RUN_UBSAN=1 00:02:31.070 ++ NET_TYPE=virt 00:02:31.070 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:31.070 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.070 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.070 ++ RUN_NIGHTLY=1 00:02:31.070 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:31.070 + [[ -n '' ]] 00:02:31.070 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:31.070 + for M in /var/spdk/build-*-manifest.txt 00:02:31.070 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:31.070 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.070 + for M in /var/spdk/build-*-manifest.txt 00:02:31.070 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:31.070 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.070 + for M in /var/spdk/build-*-manifest.txt 00:02:31.070 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:31.070 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.070 ++ uname 00:02:31.070 + [[ Linux == \L\i\n\u\x ]] 00:02:31.070 + sudo dmesg -T 00:02:31.070 + sudo dmesg --clear 00:02:31.070 + dmesg_pid=5919 00:02:31.070 + [[ Fedora Linux == FreeBSD ]] 00:02:31.070 + sudo dmesg -Tw 00:02:31.070 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.071 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.071 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:31.071 + [[ -x /usr/src/fio-static/fio ]] 00:02:31.071 + export FIO_BIN=/usr/src/fio-static/fio 00:02:31.071 + FIO_BIN=/usr/src/fio-static/fio 00:02:31.071 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:31.071 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:31.071 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:31.071 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.071 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.071 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:31.071 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.071 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.071 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:31.071 Test configuration: 00:02:31.071 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.071 SPDK_TEST_NVMF=1 00:02:31.071 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.071 SPDK_TEST_URING=1 00:02:31.071 SPDK_TEST_USDT=1 00:02:31.071 SPDK_RUN_UBSAN=1 00:02:31.071 NET_TYPE=virt 00:02:31.071 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:31.071 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.071 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.071 RUN_NIGHTLY=1 07:11:53 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:31.071 07:11:53 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:31.071 07:11:53 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:31.071 07:11:53 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.071 07:11:53 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.071 07:11:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.071 07:11:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.071 07:11:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.071 07:11:53 -- paths/export.sh@5 -- $ export PATH 00:02:31.071 07:11:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.071 07:11:53 -- 
common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:31.071 07:11:53 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:31.071 07:11:53 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732777913.XXXXXX 00:02:31.071 07:11:53 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732777913.MWJVcg 00:02:31.071 07:11:53 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:31.071 07:11:53 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:02:31.071 07:11:53 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:31.071 07:11:53 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:31.071 07:11:53 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:31.071 07:11:53 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:31.071 07:11:53 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:31.071 07:11:53 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:31.071 07:11:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.331 07:11:53 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:31.331 07:11:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:31.331 07:11:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:31.331 07:11:53 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:31.331 07:11:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:31.331 Thu Nov 28 07:11:53 AM UTC 2024 00:02:31.331 07:11:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:31.331 LTS-67-gc13c99a5e 00:02:31.331 07:11:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:31.331 07:11:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:31.331 07:11:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:31.331 07:11:53 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:31.331 07:11:53 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:31.331 07:11:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.331 ************************************ 00:02:31.331 START TEST ubsan 00:02:31.331 ************************************ 00:02:31.331 using ubsan 00:02:31.331 07:11:53 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:31.331 00:02:31.331 real 0m0.000s 00:02:31.331 user 0m0.000s 00:02:31.331 sys 0m0.000s 00:02:31.331 07:11:53 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:31.331 07:11:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.331 ************************************ 00:02:31.331 END TEST ubsan 00:02:31.331 ************************************ 00:02:31.331 07:11:53 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:31.331 07:11:53 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:31.331 07:11:53 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:31.331 07:11:53 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:31.331 07:11:53 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:31.331 07:11:53 -- 
common/autotest_common.sh@10 -- $ set +x 00:02:31.331 ************************************ 00:02:31.331 START TEST build_native_dpdk 00:02:31.331 ************************************ 00:02:31.331 07:11:53 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:31.331 07:11:53 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:31.331 07:11:53 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:31.331 07:11:53 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:31.331 07:11:53 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:31.331 07:11:53 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:31.331 07:11:53 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:31.331 07:11:53 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:31.331 07:11:53 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:31.331 07:11:53 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:31.331 07:11:53 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:31.331 07:11:53 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:31.331 07:11:53 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:31.331 07:11:53 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:31.331 07:11:53 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:31.331 07:11:53 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:31.331 07:11:53 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:31.331 07:11:53 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:31.331 eeb0605f11 version: 23.11.0 00:02:31.331 238778122a doc: update release notes for 23.11 00:02:31.331 46aa6b3cfc doc: fix description of RSS features 00:02:31.331 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:31.331 7e421ae345 devtools: support skipping forbid rule check 00:02:31.331 07:11:53 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:31.331 07:11:53 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:31.331 07:11:53 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:31.331 07:11:53 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:31.331 07:11:53 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:31.331 07:11:53 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:31.331 07:11:53 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:31.331 07:11:53 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:31.331 07:11:53 -- common/autobuild_common.sh@167 -- $ cd 
/home/vagrant/spdk_repo/dpdk 00:02:31.331 07:11:53 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:31.331 07:11:53 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:31.331 07:11:53 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:31.331 07:11:53 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:31.331 07:11:53 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:31.331 07:11:53 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:31.331 07:11:53 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:31.331 07:11:53 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:31.331 07:11:53 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:31.331 07:11:53 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:31.331 07:11:53 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:31.331 07:11:53 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:31.331 07:11:53 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:31.331 07:11:53 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:31.331 07:11:53 -- scripts/common.sh@343 -- $ case "$op" in 00:02:31.331 07:11:53 -- scripts/common.sh@344 -- $ : 1 00:02:31.331 07:11:53 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:31.331 07:11:53 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:31.331 07:11:53 -- scripts/common.sh@364 -- $ decimal 23 00:02:31.331 07:11:53 -- scripts/common.sh@352 -- $ local d=23 00:02:31.331 07:11:53 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:31.331 07:11:53 -- scripts/common.sh@354 -- $ echo 23 00:02:31.331 07:11:53 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:31.331 07:11:53 -- scripts/common.sh@365 -- $ decimal 21 00:02:31.331 07:11:53 -- scripts/common.sh@352 -- $ local d=21 00:02:31.331 07:11:53 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:31.331 07:11:53 -- scripts/common.sh@354 -- $ echo 21 00:02:31.331 07:11:53 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:31.331 07:11:53 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:31.331 07:11:53 -- scripts/common.sh@366 -- $ return 1 00:02:31.331 07:11:53 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:31.331 patching file config/rte_config.h 00:02:31.332 Hunk #1 succeeded at 60 (offset 1 line). 00:02:31.332 07:11:53 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:31.332 07:11:53 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:31.332 07:11:53 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:31.332 07:11:53 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:31.332 07:11:53 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:31.332 07:11:53 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:31.332 07:11:53 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:31.332 07:11:53 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:31.332 07:11:53 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:31.332 07:11:53 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:31.332 07:11:53 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:31.332 07:11:53 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:31.332 07:11:53 -- scripts/common.sh@343 -- $ case "$op" in 00:02:31.332 07:11:53 -- scripts/common.sh@344 -- $ : 1 00:02:31.332 07:11:53 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:31.332 07:11:53 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:31.332 07:11:53 -- scripts/common.sh@364 -- $ decimal 23 00:02:31.332 07:11:53 -- scripts/common.sh@352 -- $ local d=23 00:02:31.332 07:11:53 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:31.332 07:11:53 -- scripts/common.sh@354 -- $ echo 23 00:02:31.332 07:11:53 -- scripts/common.sh@364 -- $ ver1[v]=23 00:02:31.332 07:11:53 -- scripts/common.sh@365 -- $ decimal 24 00:02:31.332 07:11:53 -- scripts/common.sh@352 -- $ local d=24 00:02:31.332 07:11:53 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:31.332 07:11:53 -- scripts/common.sh@354 -- $ echo 24 00:02:31.332 07:11:53 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:31.332 07:11:53 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:31.332 07:11:53 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:31.332 07:11:53 -- scripts/common.sh@367 -- $ return 0 00:02:31.332 07:11:53 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:31.332 patching file lib/pcapng/rte_pcapng.c 00:02:31.332 07:11:53 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:31.332 07:11:53 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:31.332 07:11:53 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:31.332 07:11:53 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:31.332 07:11:53 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:37.897 The Meson build system 00:02:37.898 Version: 1.5.0 00:02:37.898 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:37.898 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:37.898 Build type: native build 00:02:37.898 Program cat found: YES (/usr/bin/cat) 00:02:37.898 Project name: DPDK 00:02:37.898 Project version: 23.11.0 00:02:37.898 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:37.898 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:37.898 Host machine cpu family: x86_64 00:02:37.898 Host machine cpu: x86_64 00:02:37.898 Message: ## Building in Developer Mode ## 00:02:37.898 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.898 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:37.898 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.898 Program python3 found: YES (/usr/bin/python3) 00:02:37.898 Program cat found: YES (/usr/bin/cat) 00:02:37.898 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
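The xtrace above steps through the dotted-version comparison used to decide which DPDK patches apply: the version strings are split on ".-:" and compared field by field (23.11.0 against 21.11.0, then against 24.07.0). Below is a small standalone sketch of that comparison; the function name and structure are illustrative and it is not the actual scripts/common.sh cmp_versions code.

  # Sketch of the field-by-field version test the xtrace walks through.
  # Returns 0 (true) when $1 is strictly older than $2.
  version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
  }

  version_lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"
  version_lt 23.11.0 24.07.0 && echo "23.11.0 is older than 24.07.0"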
00:02:37.898 Compiler for C supports arguments -march=native: YES 00:02:37.898 Checking for size of "void *" : 8 00:02:37.898 Checking for size of "void *" : 8 (cached) 00:02:37.898 Library m found: YES 00:02:37.898 Library numa found: YES 00:02:37.898 Has header "numaif.h" : YES 00:02:37.898 Library fdt found: NO 00:02:37.898 Library execinfo found: NO 00:02:37.898 Has header "execinfo.h" : YES 00:02:37.898 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:37.898 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.898 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.898 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.898 Run-time dependency openssl found: YES 3.1.1 00:02:37.898 Run-time dependency libpcap found: YES 1.10.4 00:02:37.898 Has header "pcap.h" with dependency libpcap: YES 00:02:37.898 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.898 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.898 Compiler for C supports arguments -Wformat: YES 00:02:37.898 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.898 Compiler for C supports arguments -Wformat-security: NO 00:02:37.898 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.898 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.898 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.898 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.898 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.898 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.898 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.898 Compiler for C supports arguments -Wundef: YES 00:02:37.898 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.898 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.898 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.898 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.898 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.898 Program objdump found: YES (/usr/bin/objdump) 00:02:37.898 Compiler for C supports arguments -mavx512f: YES 00:02:37.898 Checking if "AVX512 checking" compiles: YES 00:02:37.898 Fetching value of define "__SSE4_2__" : 1 00:02:37.898 Fetching value of define "__AES__" : 1 00:02:37.898 Fetching value of define "__AVX__" : 1 00:02:37.898 Fetching value of define "__AVX2__" : 1 00:02:37.898 Fetching value of define "__AVX512BW__" : (undefined) 00:02:37.898 Fetching value of define "__AVX512CD__" : (undefined) 00:02:37.898 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:37.898 Fetching value of define "__AVX512F__" : (undefined) 00:02:37.898 Fetching value of define "__AVX512VL__" : (undefined) 00:02:37.898 Fetching value of define "__PCLMUL__" : 1 00:02:37.898 Fetching value of define "__RDRND__" : 1 00:02:37.898 Fetching value of define "__RDSEED__" : 1 00:02:37.898 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.898 Fetching value of define "__znver1__" : (undefined) 00:02:37.898 Fetching value of define "__znver2__" : (undefined) 00:02:37.898 Fetching value of define "__znver3__" : (undefined) 00:02:37.898 Fetching value of define "__znver4__" : (undefined) 00:02:37.898 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.898 Message: lib/log: Defining dependency "log" 00:02:37.898 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.898 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.898 Checking for function "getentropy" : NO 00:02:37.898 Message: lib/eal: Defining dependency "eal" 00:02:37.898 Message: lib/ring: Defining dependency "ring" 00:02:37.898 Message: lib/rcu: Defining dependency "rcu" 00:02:37.898 Message: lib/mempool: Defining dependency "mempool" 00:02:37.898 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.898 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:37.898 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.898 Compiler for C supports arguments -mpclmul: YES 00:02:37.898 Compiler for C supports arguments -maes: YES 00:02:37.898 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.898 Compiler for C supports arguments -mavx512bw: YES 00:02:37.898 Compiler for C supports arguments -mavx512dq: YES 00:02:37.898 Compiler for C supports arguments -mavx512vl: YES 00:02:37.898 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.898 Compiler for C supports arguments -mavx2: YES 00:02:37.898 Compiler for C supports arguments -mavx: YES 00:02:37.898 Message: lib/net: Defining dependency "net" 00:02:37.898 Message: lib/meter: Defining dependency "meter" 00:02:37.898 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.898 Message: lib/pci: Defining dependency "pci" 00:02:37.898 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.898 Message: lib/metrics: Defining dependency "metrics" 00:02:37.898 Message: lib/hash: Defining dependency "hash" 00:02:37.898 Message: lib/timer: Defining dependency "timer" 00:02:37.898 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.898 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:37.898 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:37.898 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:37.898 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:37.898 Message: lib/acl: Defining dependency "acl" 00:02:37.898 Message: lib/bbdev: Defining dependency "bbdev" 00:02:37.898 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:37.898 Run-time dependency libelf found: YES 0.191 00:02:37.898 Message: lib/bpf: Defining dependency "bpf" 00:02:37.898 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:37.898 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.898 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.898 Message: lib/distributor: Defining dependency "distributor" 00:02:37.898 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.898 Message: lib/efd: Defining dependency "efd" 00:02:37.898 Message: lib/eventdev: Defining dependency "eventdev" 00:02:37.898 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:37.898 Message: lib/gpudev: Defining dependency "gpudev" 00:02:37.898 Message: lib/gro: Defining dependency "gro" 00:02:37.898 Message: lib/gso: Defining dependency "gso" 00:02:37.898 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:37.898 Message: lib/jobstats: Defining dependency "jobstats" 00:02:37.898 Message: lib/latencystats: Defining dependency "latencystats" 00:02:37.898 Message: lib/lpm: Defining dependency "lpm" 00:02:37.898 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.898 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:37.898 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:37.898 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:37.898 Message: lib/member: Defining dependency "member" 00:02:37.898 Message: lib/pcapng: Defining dependency "pcapng" 00:02:37.898 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.898 Message: lib/power: Defining dependency "power" 00:02:37.898 Message: lib/rawdev: Defining dependency "rawdev" 00:02:37.898 Message: lib/regexdev: Defining dependency "regexdev" 00:02:37.898 Message: lib/mldev: Defining dependency "mldev" 00:02:37.898 Message: lib/rib: Defining dependency "rib" 00:02:37.898 Message: lib/reorder: Defining dependency "reorder" 00:02:37.898 Message: lib/sched: Defining dependency "sched" 00:02:37.898 Message: lib/security: Defining dependency "security" 00:02:37.898 Message: lib/stack: Defining dependency "stack" 00:02:37.898 Has header "linux/userfaultfd.h" : YES 00:02:37.898 Has header "linux/vduse.h" : YES 00:02:37.898 Message: lib/vhost: Defining dependency "vhost" 00:02:37.898 Message: lib/ipsec: Defining dependency "ipsec" 00:02:37.898 Message: lib/pdcp: Defining dependency "pdcp" 00:02:37.898 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.898 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:37.898 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:37.898 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:37.898 Message: lib/fib: Defining dependency "fib" 00:02:37.898 Message: lib/port: Defining dependency "port" 00:02:37.898 Message: lib/pdump: Defining dependency "pdump" 00:02:37.898 Message: lib/table: Defining dependency "table" 00:02:37.898 Message: lib/pipeline: Defining dependency "pipeline" 00:02:37.898 Message: lib/graph: Defining dependency "graph" 00:02:37.898 Message: lib/node: Defining dependency "node" 00:02:37.898 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:38.836 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:38.836 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:38.836 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:38.836 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:38.836 Compiler for C supports arguments -Wno-unused-value: YES 00:02:38.836 Compiler for C supports arguments -Wno-format: YES 00:02:38.836 Compiler for C supports arguments -Wno-format-security: YES 00:02:38.836 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:38.836 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:38.836 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:38.836 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:38.836 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.836 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:38.836 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:38.836 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:38.836 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:38.836 Has header "sys/epoll.h" : YES 00:02:38.836 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:38.836 Configuring doxy-api-html.conf using configuration 00:02:38.836 Configuring doxy-api-man.conf using configuration 00:02:38.836 Program mandb found: YES (/usr/bin/mandb) 00:02:38.836 Program sphinx-build found: NO 00:02:38.836 Configuring rte_build_config.h using configuration 00:02:38.836 Message: 00:02:38.836 ================= 00:02:38.836 Applications Enabled 00:02:38.836 ================= 
00:02:38.836 00:02:38.836 apps: 00:02:38.836 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:38.836 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:38.836 test-pmd, test-regex, test-sad, test-security-perf, 00:02:38.836 00:02:38.836 Message: 00:02:38.836 ================= 00:02:38.836 Libraries Enabled 00:02:38.836 ================= 00:02:38.836 00:02:38.836 libs: 00:02:38.836 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:38.836 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:38.836 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:38.836 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:38.836 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:38.836 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:38.836 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:38.836 00:02:38.836 00:02:38.836 Message: 00:02:38.836 =============== 00:02:38.837 Drivers Enabled 00:02:38.837 =============== 00:02:38.837 00:02:38.837 common: 00:02:38.837 00:02:38.837 bus: 00:02:38.837 pci, vdev, 00:02:38.837 mempool: 00:02:38.837 ring, 00:02:38.837 dma: 00:02:38.837 00:02:38.837 net: 00:02:38.837 i40e, 00:02:38.837 raw: 00:02:38.837 00:02:38.837 crypto: 00:02:38.837 00:02:38.837 compress: 00:02:38.837 00:02:38.837 regex: 00:02:38.837 00:02:38.837 ml: 00:02:38.837 00:02:38.837 vdpa: 00:02:38.837 00:02:38.837 event: 00:02:38.837 00:02:38.837 baseband: 00:02:38.837 00:02:38.837 gpu: 00:02:38.837 00:02:38.837 00:02:38.837 Message: 00:02:38.837 ================= 00:02:38.837 Content Skipped 00:02:38.837 ================= 00:02:38.837 00:02:38.837 apps: 00:02:38.837 00:02:38.837 libs: 00:02:38.837 00:02:38.837 drivers: 00:02:38.837 common/cpt: not in enabled drivers build config 00:02:38.837 common/dpaax: not in enabled drivers build config 00:02:38.837 common/iavf: not in enabled drivers build config 00:02:38.837 common/idpf: not in enabled drivers build config 00:02:38.837 common/mvep: not in enabled drivers build config 00:02:38.837 common/octeontx: not in enabled drivers build config 00:02:38.837 bus/auxiliary: not in enabled drivers build config 00:02:38.837 bus/cdx: not in enabled drivers build config 00:02:38.837 bus/dpaa: not in enabled drivers build config 00:02:38.837 bus/fslmc: not in enabled drivers build config 00:02:38.837 bus/ifpga: not in enabled drivers build config 00:02:38.837 bus/platform: not in enabled drivers build config 00:02:38.837 bus/vmbus: not in enabled drivers build config 00:02:38.837 common/cnxk: not in enabled drivers build config 00:02:38.837 common/mlx5: not in enabled drivers build config 00:02:38.837 common/nfp: not in enabled drivers build config 00:02:38.837 common/qat: not in enabled drivers build config 00:02:38.837 common/sfc_efx: not in enabled drivers build config 00:02:38.837 mempool/bucket: not in enabled drivers build config 00:02:38.837 mempool/cnxk: not in enabled drivers build config 00:02:38.837 mempool/dpaa: not in enabled drivers build config 00:02:38.837 mempool/dpaa2: not in enabled drivers build config 00:02:38.837 mempool/octeontx: not in enabled drivers build config 00:02:38.837 mempool/stack: not in enabled drivers build config 00:02:38.837 dma/cnxk: not in enabled drivers build config 00:02:38.837 dma/dpaa: not in enabled drivers build config 00:02:38.837 dma/dpaa2: not in enabled drivers build config 00:02:38.837 
dma/hisilicon: not in enabled drivers build config 00:02:38.837 dma/idxd: not in enabled drivers build config 00:02:38.837 dma/ioat: not in enabled drivers build config 00:02:38.837 dma/skeleton: not in enabled drivers build config 00:02:38.837 net/af_packet: not in enabled drivers build config 00:02:38.837 net/af_xdp: not in enabled drivers build config 00:02:38.837 net/ark: not in enabled drivers build config 00:02:38.837 net/atlantic: not in enabled drivers build config 00:02:38.837 net/avp: not in enabled drivers build config 00:02:38.837 net/axgbe: not in enabled drivers build config 00:02:38.837 net/bnx2x: not in enabled drivers build config 00:02:38.837 net/bnxt: not in enabled drivers build config 00:02:38.837 net/bonding: not in enabled drivers build config 00:02:38.837 net/cnxk: not in enabled drivers build config 00:02:38.837 net/cpfl: not in enabled drivers build config 00:02:38.837 net/cxgbe: not in enabled drivers build config 00:02:38.837 net/dpaa: not in enabled drivers build config 00:02:38.837 net/dpaa2: not in enabled drivers build config 00:02:38.837 net/e1000: not in enabled drivers build config 00:02:38.837 net/ena: not in enabled drivers build config 00:02:38.837 net/enetc: not in enabled drivers build config 00:02:38.837 net/enetfec: not in enabled drivers build config 00:02:38.837 net/enic: not in enabled drivers build config 00:02:38.837 net/failsafe: not in enabled drivers build config 00:02:38.837 net/fm10k: not in enabled drivers build config 00:02:38.837 net/gve: not in enabled drivers build config 00:02:38.837 net/hinic: not in enabled drivers build config 00:02:38.837 net/hns3: not in enabled drivers build config 00:02:38.837 net/iavf: not in enabled drivers build config 00:02:38.837 net/ice: not in enabled drivers build config 00:02:38.837 net/idpf: not in enabled drivers build config 00:02:38.837 net/igc: not in enabled drivers build config 00:02:38.837 net/ionic: not in enabled drivers build config 00:02:38.837 net/ipn3ke: not in enabled drivers build config 00:02:38.837 net/ixgbe: not in enabled drivers build config 00:02:38.837 net/mana: not in enabled drivers build config 00:02:38.837 net/memif: not in enabled drivers build config 00:02:38.837 net/mlx4: not in enabled drivers build config 00:02:38.837 net/mlx5: not in enabled drivers build config 00:02:38.837 net/mvneta: not in enabled drivers build config 00:02:38.837 net/mvpp2: not in enabled drivers build config 00:02:38.837 net/netvsc: not in enabled drivers build config 00:02:38.837 net/nfb: not in enabled drivers build config 00:02:38.837 net/nfp: not in enabled drivers build config 00:02:38.837 net/ngbe: not in enabled drivers build config 00:02:38.837 net/null: not in enabled drivers build config 00:02:38.837 net/octeontx: not in enabled drivers build config 00:02:38.837 net/octeon_ep: not in enabled drivers build config 00:02:38.837 net/pcap: not in enabled drivers build config 00:02:38.837 net/pfe: not in enabled drivers build config 00:02:38.837 net/qede: not in enabled drivers build config 00:02:38.837 net/ring: not in enabled drivers build config 00:02:38.837 net/sfc: not in enabled drivers build config 00:02:38.837 net/softnic: not in enabled drivers build config 00:02:38.837 net/tap: not in enabled drivers build config 00:02:38.837 net/thunderx: not in enabled drivers build config 00:02:38.837 net/txgbe: not in enabled drivers build config 00:02:38.837 net/vdev_netvsc: not in enabled drivers build config 00:02:38.837 net/vhost: not in enabled drivers build config 00:02:38.837 net/virtio: 
not in enabled drivers build config 00:02:38.837 net/vmxnet3: not in enabled drivers build config 00:02:38.837 raw/cnxk_bphy: not in enabled drivers build config 00:02:38.837 raw/cnxk_gpio: not in enabled drivers build config 00:02:38.837 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:38.837 raw/ifpga: not in enabled drivers build config 00:02:38.837 raw/ntb: not in enabled drivers build config 00:02:38.837 raw/skeleton: not in enabled drivers build config 00:02:38.837 crypto/armv8: not in enabled drivers build config 00:02:38.837 crypto/bcmfs: not in enabled drivers build config 00:02:38.837 crypto/caam_jr: not in enabled drivers build config 00:02:38.837 crypto/ccp: not in enabled drivers build config 00:02:38.837 crypto/cnxk: not in enabled drivers build config 00:02:38.837 crypto/dpaa_sec: not in enabled drivers build config 00:02:38.837 crypto/dpaa2_sec: not in enabled drivers build config 00:02:38.837 crypto/ipsec_mb: not in enabled drivers build config 00:02:38.837 crypto/mlx5: not in enabled drivers build config 00:02:38.837 crypto/mvsam: not in enabled drivers build config 00:02:38.837 crypto/nitrox: not in enabled drivers build config 00:02:38.837 crypto/null: not in enabled drivers build config 00:02:38.837 crypto/octeontx: not in enabled drivers build config 00:02:38.837 crypto/openssl: not in enabled drivers build config 00:02:38.837 crypto/scheduler: not in enabled drivers build config 00:02:38.837 crypto/uadk: not in enabled drivers build config 00:02:38.837 crypto/virtio: not in enabled drivers build config 00:02:38.837 compress/isal: not in enabled drivers build config 00:02:38.837 compress/mlx5: not in enabled drivers build config 00:02:38.837 compress/octeontx: not in enabled drivers build config 00:02:38.837 compress/zlib: not in enabled drivers build config 00:02:38.837 regex/mlx5: not in enabled drivers build config 00:02:38.837 regex/cn9k: not in enabled drivers build config 00:02:38.837 ml/cnxk: not in enabled drivers build config 00:02:38.837 vdpa/ifc: not in enabled drivers build config 00:02:38.837 vdpa/mlx5: not in enabled drivers build config 00:02:38.837 vdpa/nfp: not in enabled drivers build config 00:02:38.837 vdpa/sfc: not in enabled drivers build config 00:02:38.837 event/cnxk: not in enabled drivers build config 00:02:38.837 event/dlb2: not in enabled drivers build config 00:02:38.837 event/dpaa: not in enabled drivers build config 00:02:38.837 event/dpaa2: not in enabled drivers build config 00:02:38.837 event/dsw: not in enabled drivers build config 00:02:38.837 event/opdl: not in enabled drivers build config 00:02:38.837 event/skeleton: not in enabled drivers build config 00:02:38.837 event/sw: not in enabled drivers build config 00:02:38.837 event/octeontx: not in enabled drivers build config 00:02:38.837 baseband/acc: not in enabled drivers build config 00:02:38.837 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:38.837 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:38.837 baseband/la12xx: not in enabled drivers build config 00:02:38.837 baseband/null: not in enabled drivers build config 00:02:38.837 baseband/turbo_sw: not in enabled drivers build config 00:02:38.837 gpu/cuda: not in enabled drivers build config 00:02:38.837 00:02:38.837 00:02:38.837 Build targets in project: 220 00:02:38.837 00:02:38.837 DPDK 23.11.0 00:02:38.837 00:02:38.837 User defined options 00:02:38.837 libdir : lib 00:02:38.837 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:38.837 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:38.837 c_link_args : 00:02:38.837 enable_docs : false 00:02:38.837 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:38.837 enable_kmods : false 00:02:38.837 machine : native 00:02:38.837 tests : false 00:02:38.837 00:02:38.837 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.837 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:39.096 07:12:01 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:39.096 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:39.355 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:39.355 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.355 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.355 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.355 [5/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:39.355 [6/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.355 [7/710] Linking static target lib/librte_kvargs.a 00:02:39.355 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:39.355 [9/710] Linking static target lib/librte_log.a 00:02:39.355 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.613 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.872 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.872 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.872 [14/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.872 [15/710] Linking target lib/librte_log.so.24.0 00:02:39.872 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.872 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.872 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:40.130 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:40.389 [20/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:40.389 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:40.389 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:40.389 [23/710] Linking target lib/librte_kvargs.so.24.0 00:02:40.389 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.389 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:40.389 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:40.648 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:40.648 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:40.648 [29/710] Linking static target lib/librte_telemetry.a 00:02:40.648 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.648 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.648 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.906 [33/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.906 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.906 [35/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.906 [36/710] Linking target lib/librte_telemetry.so.24.0 00:02:40.906 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.906 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:41.164 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:41.164 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:41.164 [41/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:41.164 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:41.164 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:41.164 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:41.422 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:41.422 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:41.680 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:41.680 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:41.680 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.680 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:41.680 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:41.938 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:41.938 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:41.938 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:41.938 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.938 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:42.196 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:42.196 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:42.196 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.196 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:42.196 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:42.196 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:42.454 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:42.455 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:42.455 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:42.455 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:42.455 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.455 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.738 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.738 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:42.738 [71/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:43.001 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 
00:02:43.001 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:43.001 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:43.001 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:43.001 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.001 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:43.268 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.268 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.526 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:43.526 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.526 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.526 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.526 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.526 [85/710] Linking static target lib/librte_ring.a 00:02:43.785 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.785 [87/710] Linking static target lib/librte_eal.a 00:02:43.785 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.785 [89/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:43.785 [90/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.043 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.043 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.043 [93/710] Linking static target lib/librte_mempool.a 00:02:44.043 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.043 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.301 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.301 [97/710] Linking static target lib/librte_rcu.a 00:02:44.560 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:44.560 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:44.560 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.560 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.560 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.819 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.819 [104/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.819 [105/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.819 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.819 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.819 [108/710] Linking static target lib/librte_mbuf.a 00:02:45.079 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:45.079 [110/710] Linking static target lib/librte_net.a 00:02:45.338 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.338 [112/710] Linking static target lib/librte_meter.a 00:02:45.338 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.338 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:45.596 [115/710] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:45.596 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.596 [117/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.596 [118/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.596 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:46.164 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:46.164 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:46.424 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:46.424 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.424 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:46.682 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:46.682 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:46.682 [127/710] Linking static target lib/librte_pci.a 00:02:46.682 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:46.682 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:46.941 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.941 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:46.941 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:46.941 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:46.941 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:46.941 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:46.941 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:46.941 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:46.941 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:46.941 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:46.941 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:47.200 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:47.200 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:47.458 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:47.458 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:47.458 [145/710] Linking static target lib/librte_cmdline.a 00:02:47.718 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:47.718 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:47.718 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:47.718 [149/710] Linking static target lib/librte_metrics.a 00:02:47.718 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:47.975 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.233 [152/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:48.233 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:48.233 [154/710] Linking static target lib/librte_timer.a 00:02:48.233 
[155/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.492 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.062 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:49.062 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:49.062 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:49.062 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:49.641 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:49.641 [162/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:49.641 [163/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.912 [164/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:49.912 [165/710] Linking static target lib/librte_bitratestats.a 00:02:49.912 [166/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:49.912 [167/710] Linking target lib/librte_eal.so.24.0 00:02:49.912 [168/710] Linking static target lib/librte_ethdev.a 00:02:49.912 [169/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:49.912 [170/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.912 [171/710] Linking target lib/librte_ring.so.24.0 00:02:49.912 [172/710] Linking target lib/librte_meter.so.24.0 00:02:49.913 [173/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:50.170 [174/710] Linking target lib/librte_pci.so.24.0 00:02:50.170 [175/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.170 [176/710] Linking static target lib/librte_hash.a 00:02:50.170 [177/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:50.170 [178/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:50.170 [179/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:50.170 [180/710] Linking target lib/librte_rcu.so.24.0 00:02:50.170 [181/710] Linking target lib/librte_mempool.so.24.0 00:02:50.170 [182/710] Linking target lib/librte_timer.so.24.0 00:02:50.429 [183/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:50.429 [184/710] Linking static target lib/acl/libavx2_tmp.a 00:02:50.429 [185/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:50.429 [186/710] Linking static target lib/librte_bbdev.a 00:02:50.429 [187/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:50.429 [188/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:50.429 [189/710] Linking target lib/librte_mbuf.so.24.0 00:02:50.429 [190/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:50.429 [191/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:50.429 [192/710] Linking target lib/librte_net.so.24.0 00:02:50.688 [193/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:50.688 [194/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:50.688 [195/710] Linking static target lib/acl/libavx512_tmp.a 00:02:50.688 [196/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:50.688 [197/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:50.688 [198/710] Linking target lib/librte_hash.so.24.0 00:02:50.688 [199/710] Linking target lib/librte_cmdline.so.24.0 00:02:50.947 [200/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:50.947 [201/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:50.947 [202/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.947 [203/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:50.947 [204/710] Linking static target lib/librte_acl.a 00:02:50.947 [205/710] Linking target lib/librte_bbdev.so.24.0 00:02:50.947 [206/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:51.206 [207/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:51.206 [208/710] Linking static target lib/librte_cfgfile.a 00:02:51.206 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.464 [210/710] Linking target lib/librte_acl.so.24.0 00:02:51.464 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:51.464 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:51.464 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:51.464 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.723 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:02:51.723 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:51.723 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.723 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.981 [219/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:51.981 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:51.981 [221/710] Linking static target lib/librte_bpf.a 00:02:51.981 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.240 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.240 [224/710] Linking static target lib/librte_compressdev.a 00:02:52.240 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.240 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.498 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:52.498 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:52.757 [229/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.758 [230/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:52.758 [231/710] Linking static target lib/librte_distributor.a 00:02:52.758 [232/710] Linking target lib/librte_compressdev.so.24.0 00:02:52.758 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:53.015 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.015 [235/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:53.015 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:53.015 [237/710] Linking target lib/librte_distributor.so.24.0 00:02:53.015 [238/710] Linking static target lib/librte_dmadev.a 
00:02:53.273 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.273 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:53.273 [241/710] Linking target lib/librte_dmadev.so.24.0 00:02:53.531 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:53.790 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:53.790 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:54.089 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:54.089 [246/710] Linking static target lib/librte_efd.a 00:02:54.089 [247/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:54.089 [248/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:54.089 [249/710] Linking static target lib/librte_cryptodev.a 00:02:54.089 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.348 [251/710] Linking target lib/librte_efd.so.24.0 00:02:54.608 [252/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:54.608 [253/710] Linking static target lib/librte_dispatcher.a 00:02:54.608 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:54.868 [255/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:54.868 [256/710] Linking static target lib/librte_gpudev.a 00:02:54.868 [257/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.868 [258/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:54.868 [259/710] Linking target lib/librte_ethdev.so.24.0 00:02:54.868 [260/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:54.868 [261/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:55.127 [262/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.127 [263/710] Linking target lib/librte_metrics.so.24.0 00:02:55.127 [264/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:55.127 [265/710] Linking target lib/librte_bpf.so.24.0 00:02:55.127 [266/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:55.127 [267/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:55.127 [268/710] Linking target lib/librte_bitratestats.so.24.0 00:02:55.385 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:55.385 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:55.385 [271/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.385 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:02:55.645 [273/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.645 [274/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:55.645 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:55.645 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:55.645 [277/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:55.904 [278/710] Linking static target lib/librte_eventdev.a 00:02:55.904 [279/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:55.904 [280/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:55.904 [281/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:55.904 [282/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:55.904 [283/710] Linking static target lib/librte_gro.a 00:02:56.163 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:56.163 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:56.163 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.163 [287/710] Linking target lib/librte_gro.so.24.0 00:02:56.163 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:56.422 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:56.422 [290/710] Linking static target lib/librte_gso.a 00:02:56.422 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.422 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:56.422 [293/710] Linking target lib/librte_gso.so.24.0 00:02:56.680 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:56.680 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:56.680 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:56.939 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:56.939 [298/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:56.939 [299/710] Linking static target lib/librte_jobstats.a 00:02:56.939 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:56.939 [301/710] Linking static target lib/librte_ip_frag.a 00:02:57.207 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:57.207 [303/710] Linking static target lib/librte_latencystats.a 00:02:57.207 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.207 [305/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.207 [306/710] Linking target lib/librte_jobstats.so.24.0 00:02:57.207 [307/710] Linking target lib/librte_ip_frag.so.24.0 00:02:57.480 [308/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:57.480 [309/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.480 [310/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:57.480 [311/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:57.480 [312/710] Linking target lib/librte_latencystats.so.24.0 00:02:57.480 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:57.480 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:57.480 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:57.480 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:57.480 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:57.739 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.999 [319/710] Linking target lib/librte_eventdev.so.24.0 00:02:57.999 [320/710] Compiling C object 
lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:57.999 [321/710] Linking static target lib/librte_lpm.a 00:02:57.999 [322/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:57.999 [323/710] Linking target lib/librte_dispatcher.so.24.0 00:02:57.999 [324/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:57.999 [325/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:58.258 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:58.258 [327/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.258 [328/710] Linking target lib/librte_lpm.so.24.0 00:02:58.258 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:58.258 [330/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:58.258 [331/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:58.258 [332/710] Linking static target lib/librte_pcapng.a 00:02:58.517 [333/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:58.517 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:58.517 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.517 [336/710] Linking target lib/librte_pcapng.so.24.0 00:02:58.776 [337/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:58.776 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:58.776 [339/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:59.035 [340/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.035 [341/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:59.035 [342/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:59.035 [343/710] Linking static target lib/librte_power.a 00:02:59.035 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:59.035 [345/710] Linking static target lib/librte_regexdev.a 00:02:59.035 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:59.035 [347/710] Linking static target lib/librte_rawdev.a 00:02:59.294 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:59.294 [349/710] Linking static target lib/librte_member.a 00:02:59.294 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:59.294 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:59.552 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:59.552 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.552 [354/710] Linking target lib/librte_member.so.24.0 00:02:59.552 [355/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.552 [356/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:59.552 [357/710] Linking static target lib/librte_mldev.a 00:02:59.552 [358/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.552 [359/710] Linking target lib/librte_rawdev.so.24.0 00:02:59.812 [360/710] Linking target lib/librte_power.so.24.0 00:02:59.812 [361/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:59.812 
[362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:59.812 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.812 [364/710] Linking target lib/librte_regexdev.so.24.0 00:03:00.070 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:00.071 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:00.071 [367/710] Linking static target lib/librte_reorder.a 00:03:00.071 [368/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:00.071 [369/710] Linking static target lib/librte_rib.a 00:03:00.328 [370/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:00.328 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:00.328 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:00.328 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:00.586 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.586 [375/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:00.586 [376/710] Linking static target lib/librte_stack.a 00:03:00.586 [377/710] Linking target lib/librte_reorder.so.24.0 00:03:00.586 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:00.586 [379/710] Linking static target lib/librte_security.a 00:03:00.586 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.586 [381/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:00.586 [382/710] Linking target lib/librte_rib.so.24.0 00:03:00.586 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.846 [384/710] Linking target lib/librte_stack.so.24.0 00:03:00.846 [385/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:00.846 [386/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.105 [387/710] Linking target lib/librte_mldev.so.24.0 00:03:01.105 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.105 [389/710] Linking target lib/librte_security.so.24.0 00:03:01.105 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:01.105 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.105 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:01.365 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:01.365 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:01.365 [395/710] Linking static target lib/librte_sched.a 00:03:01.624 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.624 [397/710] Linking target lib/librte_sched.so.24.0 00:03:01.884 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:01.884 [399/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:01.884 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:01.884 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:02.142 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:02.402 [403/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:02.402 [404/710] Compiling C 
object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:02.661 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:02.661 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:02.661 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:02.920 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:02.920 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:03.178 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:03.178 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:03.178 [412/710] Linking static target lib/librte_ipsec.a 00:03:03.178 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:03.460 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:03.460 [415/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.460 [416/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:03.460 [417/710] Linking target lib/librte_ipsec.so.24.0 00:03:03.460 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:03.719 [419/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:03.719 [420/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:03.719 [421/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:03.719 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:03.719 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:04.290 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:04.550 [425/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:04.550 [426/710] Linking static target lib/librte_pdcp.a 00:03:04.550 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:04.550 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:04.550 [429/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:04.550 [430/710] Linking static target lib/librte_fib.a 00:03:04.550 [431/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:04.550 [432/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:04.809 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.809 [434/710] Linking target lib/librte_pdcp.so.24.0 00:03:04.809 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.069 [436/710] Linking target lib/librte_fib.so.24.0 00:03:05.069 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:05.639 [438/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:05.639 [439/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:05.639 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:05.639 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:05.897 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:05.897 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:06.158 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:06.158 [445/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:06.158 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
00:03:06.158 [447/710] Linking static target lib/librte_port.a 00:03:06.418 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:06.418 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:06.418 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:06.418 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:06.418 [452/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.676 [453/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.676 [454/710] Linking target lib/librte_port.so.24.0 00:03:06.676 [455/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:06.676 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:06.676 [457/710] Linking static target lib/librte_pdump.a 00:03:06.936 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:06.936 [459/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:06.936 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.196 [461/710] Linking target lib/librte_pdump.so.24.0 00:03:07.196 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:07.455 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:07.455 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:07.714 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:07.714 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:07.714 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:07.714 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:07.973 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:07.973 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:08.232 [471/710] Linking static target lib/librte_table.a 00:03:08.232 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:08.232 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:08.799 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:08.799 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.799 [476/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:08.799 [477/710] Linking target lib/librte_table.so.24.0 00:03:09.057 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:09.057 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:09.057 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:09.316 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:09.575 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:09.575 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:09.575 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:09.575 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:09.834 [486/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:10.093 [487/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:10.352 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:10.352 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:10.352 [490/710] Linking static target lib/librte_graph.a 00:03:10.352 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:10.352 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:10.611 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:10.871 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.871 [495/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:10.871 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:10.871 [497/710] Linking target lib/librte_graph.so.24.0 00:03:11.130 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:11.130 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:11.390 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:11.649 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:11.649 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:11.649 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:11.649 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:11.649 [505/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:11.908 [506/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:12.168 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:12.168 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:12.427 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:12.427 [510/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:12.427 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:12.427 [512/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:12.427 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:12.427 [514/710] Linking static target lib/librte_node.a 00:03:12.427 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:12.686 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.946 [517/710] Linking target lib/librte_node.so.24.0 00:03:12.946 [518/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:12.946 [519/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:12.946 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:12.946 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:13.205 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:13.205 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:13.205 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:13.205 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:13.205 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:13.205 [527/710] Linking static target drivers/librte_bus_pci.a 00:03:13.463 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:13.463 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:13.463 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:13.463 [531/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:13.463 [532/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:13.463 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:13.463 [534/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:13.720 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:13.720 [536/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.720 [537/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:13.720 [538/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:13.720 [539/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:13.720 [540/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:13.977 [541/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:13.977 [542/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:13.977 [543/710] Linking static target drivers/librte_mempool_ring.a 00:03:13.977 [544/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:13.977 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:13.977 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:14.545 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:14.545 [548/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:14.804 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:14.804 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:14.804 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:15.741 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:15.741 [553/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:15.741 [554/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:15.741 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:15.741 [556/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:15.741 [557/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:16.310 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:16.310 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:16.569 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:16.569 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:16.569 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:17.137 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:17.396 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:17.396 [565/710] Compiling C object 
app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:17.396 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:18.010 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:18.010 [568/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:18.010 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:18.010 [570/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:18.010 [571/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:18.010 [572/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:18.270 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:18.529 [574/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:18.529 [575/710] Linking static target lib/librte_vhost.a 00:03:18.529 [576/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:18.788 [577/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:18.788 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:18.788 [579/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:18.788 [580/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:18.788 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:18.788 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:19.047 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:19.307 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:19.307 [585/710] Linking static target drivers/librte_net_i40e.a 00:03:19.307 [586/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:19.307 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:19.566 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:19.566 [589/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:19.566 [590/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:19.566 [591/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:19.566 [592/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:19.566 [593/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.566 [594/710] Linking target lib/librte_vhost.so.24.0 00:03:19.824 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.083 [596/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:20.083 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:20.083 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:20.083 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:20.651 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:20.651 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:20.910 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:20.910 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:20.910 [604/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:20.910 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:20.910 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:21.169 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:21.427 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:21.686 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:21.686 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:21.686 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:21.686 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:21.946 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:21.946 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:21.946 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:21.946 [616/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:21.946 [617/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:22.205 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:22.464 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:22.464 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:22.722 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:22.722 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:22.722 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:22.986 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:22.986 [625/710] Linking static target lib/librte_pipeline.a 00:03:23.921 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:23.921 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:23.921 [628/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:23.921 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:23.921 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:23.921 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:24.180 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:24.180 [633/710] Linking target app/dpdk-dumpcap 00:03:24.439 [634/710] Linking target app/dpdk-graph 00:03:24.440 [635/710] Linking target app/dpdk-pdump 00:03:24.440 [636/710] Linking target app/dpdk-proc-info 00:03:24.440 [637/710] Linking target app/dpdk-test-acl 00:03:24.440 [638/710] Linking target app/dpdk-test-cmdline 00:03:24.440 [639/710] Linking target app/dpdk-test-compress-perf 00:03:24.698 [640/710] Linking target app/dpdk-test-crypto-perf 00:03:24.698 [641/710] Linking target app/dpdk-test-fib 00:03:24.957 [642/710] Linking target app/dpdk-test-dma-perf 00:03:24.957 [643/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:24.957 [644/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:25.217 [645/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:25.217 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:25.217 [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:25.217 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:25.785 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:25.786 [650/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:25.786 [651/710] Linking target app/dpdk-test-gpudev 00:03:25.786 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:25.786 [653/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.786 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:25.786 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:25.786 [656/710] Linking target lib/librte_pipeline.so.24.0 00:03:26.044 [657/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:26.044 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:26.044 [659/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:26.044 [660/710] Linking target app/dpdk-test-eventdev 00:03:26.336 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:26.336 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:26.336 [663/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:26.336 [664/710] Linking target app/dpdk-test-flow-perf 00:03:26.594 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:26.594 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:26.594 [667/710] Linking target app/dpdk-test-bbdev 00:03:26.594 [668/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:26.853 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:27.112 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:27.112 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:27.112 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:27.112 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:27.370 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:27.370 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:27.628 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:27.628 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:27.887 [678/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:27.887 [679/710] Linking target app/dpdk-test-pipeline 00:03:27.887 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:28.146 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:28.146 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:28.406 [683/710] Linking target 
app/dpdk-test-mldev 00:03:28.665 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:28.665 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:28.923 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:28.923 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:28.923 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:29.183 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:29.183 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:29.442 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:29.442 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:29.442 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:30.010 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:30.010 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:30.011 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:30.578 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:30.578 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:30.578 [699/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:30.578 [700/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:30.578 [701/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:30.865 [702/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:30.865 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:31.123 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:31.123 [705/710] Linking target app/dpdk-test-sad 00:03:31.123 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:31.123 [707/710] Linking target app/dpdk-test-regex 00:03:31.380 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:31.637 [709/710] Linking target app/dpdk-testpmd 00:03:31.895 [710/710] Linking target app/dpdk-test-security-perf 00:03:31.895 07:12:53 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:31.895 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:31.895 [0/1] Installing files. 
00:03:32.155 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.155 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.156 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:32.156 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.156 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.157 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.416 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.416 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.416 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.416 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.416 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.417 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.418 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:32.418 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:32.418 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.418 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
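At this point in the install step each DPDK library is being copied into /home/vagrant/spdk_repo/dpdk/build/lib twice: once as a static archive (librte_*.a) and once as a versioned shared object (librte_*.so.24.0); the unversioned .so and .so.24 names are created as symlinks near the end of this step. A minimal sketch of how one might inspect that layout after the install finishes; the choice of librte_eal and the readelf/ls commands are illustrative and not part of this job, only the prefix path comes from this log:
  # Sketch: inspect one installed library under the prefix used in this log.
  cd /home/vagrant/spdk_repo/dpdk/build/lib
  ls -l librte_eal.so librte_eal.so.24 librte_eal.so.24.0   # symlink chain pointing at the real object
  readelf -d librte_eal.so.24.0 | grep SONAME                # expected to report the .so.24 soname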
00:03:32.419 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
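The libraries above, the headers installed further down, and the libdpdk.pc / libdpdk-libs.pc files that land in build/lib/pkgconfig later in this step are what an application build would normally consume through pkg-config. A hedged sketch of compiling against this prefix; the source file name hello_dpdk.c and the use of cc are assumptions for illustration, only the paths are taken from this log:
  # Sketch: build against the DPDK installed under /home/vagrant/spdk_repo/dpdk/build.
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk                                        # confirm the .pc files are visible
  cc hello_dpdk.c $(pkg-config --cflags --libs libdpdk) -o hello_dpdk    # hypothetical application source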
00:03:32.419 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.419 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.678 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.678 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.678 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.678 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.678 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.678 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.678 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.678 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.678 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.678 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:32.678 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.678 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.679 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.939 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:32.940 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:32.940 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:32.940 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:32.940 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:32.940 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:32.940 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:32.940 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:32.940 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:32.940 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:32.940 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:32.940 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:32.940 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:32.940 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:32.940 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:32.940 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:32.940 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:32.940 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:32.940 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:32.940 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:32.940 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:32.940 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:32.940 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:32.940 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:32.940 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:32.940 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:32.940 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:32.940 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:32.940 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:32.940 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:32.940 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:32.940 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:32.940 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:32.940 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:32.940 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:32.940 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:32.940 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:32.940 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:32.940 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:32.940 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:32.940 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:32.940 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:32.940 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:32.940 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:32.940 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:32.940 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:32.940 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:32.940 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:32.940 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:32.940 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:32.940 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:32.940 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:32.940 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:32.941 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:32.941 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:32.941 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:32.941 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:32.941 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:32.941 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:32.941 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:32.941 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:32.941 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:32.941 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:32.941 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:32.941 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:32.941 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:32.941 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:32.941 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:32.941 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:32.941 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:32.941 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:32.941 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:32.941 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:32.941 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:32.941 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:32.941 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:32.941 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:32.941 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:32.941 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:32.941 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:32.941 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:32.941 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:32.941 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:32.941 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:32.941 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:32.941 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:32.941 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:32.941 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:32.941 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:32.941 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:32.941 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:32.941 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:32.941 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:32.941 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:32.941 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:32.941 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:32.941 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:32.941 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:32.941 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:32.941 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:32.941 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:32.941 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:32.941 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:32.941 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:32.941 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:32.941 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:32.941 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:32.941 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:32.941 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:32.941 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:32.941 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:32.941 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:32.941 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:32.941 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:32.941 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:32.941 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:32.941 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:32.941 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:32.941 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:32.941 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:32.941 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:32.941 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:32.941 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:32.941 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:32.941 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:32.941 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:32.941 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:32.941 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:32.941 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:32.941 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:32.941 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:32.941 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:32.941 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:32.941 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:32.941 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:32.941 07:12:55 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:32.941 ************************************ 00:03:32.941 END TEST build_native_dpdk 00:03:32.941 ************************************ 00:03:32.941 07:12:55 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:32.941 07:12:55 -- common/autobuild_common.sh@203 -- $ cat 00:03:32.941 07:12:55 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:32.941 00:03:32.941 real 1m1.675s 00:03:32.941 user 7m24.573s 00:03:32.941 sys 1m11.920s 00:03:32.941 07:12:55 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:32.941 07:12:55 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.941 07:12:55 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:32.941 07:12:55 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:32.941 07:12:55 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:32.941 07:12:55 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:32.941 07:12:55 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:32.941 07:12:55 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:32.941 07:12:55 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:32.941 07:12:55 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:33.198 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:33.198 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.198 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:33.198 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:33.764 Using 'verbs' RDMA provider 00:03:46.624 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:04:01.544 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:04:01.544 Creating mk/config.mk...done. 00:04:01.544 Creating mk/cc.flags.mk...done. 00:04:01.544 Type 'make' to build. 00:04:01.544 07:13:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:01.544 07:13:21 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:01.544 07:13:21 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:01.544 07:13:21 -- common/autotest_common.sh@10 -- $ set +x 00:04:01.544 ************************************ 00:04:01.544 START TEST make 00:04:01.544 ************************************ 00:04:01.544 07:13:21 -- common/autotest_common.sh@1114 -- $ make -j10 00:04:01.544 make[1]: Nothing to be done for 'all'. 
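[editor's note, not part of the captured run] The configure step above hands SPDK the DPDK tree that was just installed under /home/vagrant/spdk_repo/dpdk/build via --with-dpdk, and the log notes that the pkg-config files from build/lib/pkgconfig are picked up for additional libraries. A minimal sketch, assuming only the paths already shown in this log, of how one might sanity-check such a prebuilt DPDK before pointing configure at it:

  # Sketch only (not from the captured log): ask pkg-config about the DPDK
  # install that --with-dpdk will consume. Paths reuse the ones in this log;
  # the exact version and flags printed depend on the actual build.
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # prints the DPDK version this tree was built from
  pkg-config --cflags --libs libdpdk     # include/link flags SPDK's configure will pick up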
00:04:23.487 CC lib/ut_mock/mock.o 00:04:23.487 CC lib/ut/ut.o 00:04:23.487 CC lib/log/log.o 00:04:23.487 CC lib/log/log_flags.o 00:04:23.487 CC lib/log/log_deprecated.o 00:04:23.487 LIB libspdk_ut_mock.a 00:04:23.487 SO libspdk_ut_mock.so.5.0 00:04:23.487 LIB libspdk_log.a 00:04:23.487 LIB libspdk_ut.a 00:04:23.487 SO libspdk_log.so.6.1 00:04:23.487 SYMLINK libspdk_ut_mock.so 00:04:23.487 SO libspdk_ut.so.1.0 00:04:23.487 SYMLINK libspdk_log.so 00:04:23.487 SYMLINK libspdk_ut.so 00:04:23.487 CC lib/util/base64.o 00:04:23.487 CC lib/util/bit_array.o 00:04:23.487 CC lib/util/cpuset.o 00:04:23.487 CC lib/util/crc16.o 00:04:23.487 CC lib/util/crc32.o 00:04:23.487 CC lib/util/crc32c.o 00:04:23.487 CXX lib/trace_parser/trace.o 00:04:23.487 CC lib/ioat/ioat.o 00:04:23.487 CC lib/dma/dma.o 00:04:23.487 CC lib/vfio_user/host/vfio_user_pci.o 00:04:23.487 CC lib/util/crc32_ieee.o 00:04:23.487 CC lib/vfio_user/host/vfio_user.o 00:04:23.487 CC lib/util/crc64.o 00:04:23.487 LIB libspdk_dma.a 00:04:23.487 SO libspdk_dma.so.3.0 00:04:23.487 CC lib/util/dif.o 00:04:23.487 CC lib/util/fd.o 00:04:23.487 CC lib/util/file.o 00:04:23.487 SYMLINK libspdk_dma.so 00:04:23.487 CC lib/util/hexlify.o 00:04:23.487 CC lib/util/iov.o 00:04:23.487 CC lib/util/math.o 00:04:23.487 LIB libspdk_ioat.a 00:04:23.487 SO libspdk_ioat.so.6.0 00:04:23.487 CC lib/util/pipe.o 00:04:23.487 CC lib/util/strerror_tls.o 00:04:23.487 CC lib/util/string.o 00:04:23.487 SYMLINK libspdk_ioat.so 00:04:23.487 CC lib/util/uuid.o 00:04:23.487 LIB libspdk_vfio_user.a 00:04:23.487 CC lib/util/fd_group.o 00:04:23.487 CC lib/util/xor.o 00:04:23.487 SO libspdk_vfio_user.so.4.0 00:04:23.487 CC lib/util/zipf.o 00:04:23.487 SYMLINK libspdk_vfio_user.so 00:04:23.746 LIB libspdk_util.a 00:04:24.004 SO libspdk_util.so.8.0 00:04:24.004 LIB libspdk_trace_parser.a 00:04:24.004 SYMLINK libspdk_util.so 00:04:24.004 SO libspdk_trace_parser.so.4.0 00:04:24.263 CC lib/rdma/common.o 00:04:24.263 CC lib/rdma/rdma_verbs.o 00:04:24.263 CC lib/conf/conf.o 00:04:24.263 CC lib/vmd/vmd.o 00:04:24.263 CC lib/idxd/idxd.o 00:04:24.263 CC lib/idxd/idxd_user.o 00:04:24.263 CC lib/vmd/led.o 00:04:24.263 CC lib/env_dpdk/env.o 00:04:24.263 CC lib/json/json_parse.o 00:04:24.263 SYMLINK libspdk_trace_parser.so 00:04:24.263 CC lib/json/json_util.o 00:04:24.263 CC lib/json/json_write.o 00:04:24.522 CC lib/idxd/idxd_kernel.o 00:04:24.522 LIB libspdk_conf.a 00:04:24.522 CC lib/env_dpdk/memory.o 00:04:24.522 CC lib/env_dpdk/pci.o 00:04:24.522 SO libspdk_conf.so.5.0 00:04:24.522 LIB libspdk_rdma.a 00:04:24.522 CC lib/env_dpdk/init.o 00:04:24.522 SO libspdk_rdma.so.5.0 00:04:24.522 SYMLINK libspdk_conf.so 00:04:24.522 CC lib/env_dpdk/threads.o 00:04:24.522 SYMLINK libspdk_rdma.so 00:04:24.522 CC lib/env_dpdk/pci_ioat.o 00:04:24.522 CC lib/env_dpdk/pci_virtio.o 00:04:24.788 LIB libspdk_json.a 00:04:24.788 SO libspdk_json.so.5.1 00:04:24.788 CC lib/env_dpdk/pci_vmd.o 00:04:24.788 LIB libspdk_idxd.a 00:04:24.788 CC lib/env_dpdk/pci_idxd.o 00:04:24.788 CC lib/env_dpdk/pci_event.o 00:04:24.788 SYMLINK libspdk_json.so 00:04:24.788 SO libspdk_idxd.so.11.0 00:04:24.788 CC lib/env_dpdk/sigbus_handler.o 00:04:24.788 LIB libspdk_vmd.a 00:04:24.788 CC lib/env_dpdk/pci_dpdk.o 00:04:24.788 SYMLINK libspdk_idxd.so 00:04:24.788 SO libspdk_vmd.so.5.0 00:04:24.788 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:24.788 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:25.046 SYMLINK libspdk_vmd.so 00:04:25.046 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:25.046 CC lib/jsonrpc/jsonrpc_server.o 00:04:25.046 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:25.046 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:25.305 LIB libspdk_jsonrpc.a 00:04:25.305 SO libspdk_jsonrpc.so.5.1 00:04:25.305 SYMLINK libspdk_jsonrpc.so 00:04:25.564 CC lib/rpc/rpc.o 00:04:25.564 LIB libspdk_env_dpdk.a 00:04:25.822 SO libspdk_env_dpdk.so.13.0 00:04:25.822 LIB libspdk_rpc.a 00:04:25.822 SO libspdk_rpc.so.5.0 00:04:25.822 SYMLINK libspdk_rpc.so 00:04:25.822 SYMLINK libspdk_env_dpdk.so 00:04:26.080 CC lib/notify/notify.o 00:04:26.080 CC lib/notify/notify_rpc.o 00:04:26.080 CC lib/sock/sock.o 00:04:26.080 CC lib/sock/sock_rpc.o 00:04:26.080 CC lib/trace/trace.o 00:04:26.080 CC lib/trace/trace_flags.o 00:04:26.080 CC lib/trace/trace_rpc.o 00:04:26.339 LIB libspdk_notify.a 00:04:26.339 SO libspdk_notify.so.5.0 00:04:26.339 LIB libspdk_trace.a 00:04:26.339 SYMLINK libspdk_notify.so 00:04:26.339 SO libspdk_trace.so.9.0 00:04:26.339 SYMLINK libspdk_trace.so 00:04:26.597 LIB libspdk_sock.a 00:04:26.597 SO libspdk_sock.so.8.0 00:04:26.597 CC lib/thread/thread.o 00:04:26.597 CC lib/thread/iobuf.o 00:04:26.597 SYMLINK libspdk_sock.so 00:04:26.856 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:26.856 CC lib/nvme/nvme_ctrlr.o 00:04:26.856 CC lib/nvme/nvme_fabric.o 00:04:26.856 CC lib/nvme/nvme_ns_cmd.o 00:04:26.856 CC lib/nvme/nvme_ns.o 00:04:26.856 CC lib/nvme/nvme_pcie_common.o 00:04:26.856 CC lib/nvme/nvme_qpair.o 00:04:26.856 CC lib/nvme/nvme_pcie.o 00:04:26.856 CC lib/nvme/nvme.o 00:04:27.810 CC lib/nvme/nvme_quirks.o 00:04:27.810 CC lib/nvme/nvme_transport.o 00:04:27.810 CC lib/nvme/nvme_discovery.o 00:04:27.810 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:27.810 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:27.810 CC lib/nvme/nvme_tcp.o 00:04:27.810 CC lib/nvme/nvme_opal.o 00:04:28.068 CC lib/nvme/nvme_io_msg.o 00:04:28.325 CC lib/nvme/nvme_poll_group.o 00:04:28.325 LIB libspdk_thread.a 00:04:28.325 SO libspdk_thread.so.9.0 00:04:28.325 CC lib/nvme/nvme_zns.o 00:04:28.325 CC lib/nvme/nvme_cuse.o 00:04:28.325 SYMLINK libspdk_thread.so 00:04:28.325 CC lib/nvme/nvme_vfio_user.o 00:04:28.583 CC lib/nvme/nvme_rdma.o 00:04:28.583 CC lib/accel/accel.o 00:04:28.583 CC lib/blob/blobstore.o 00:04:28.842 CC lib/accel/accel_rpc.o 00:04:28.842 CC lib/blob/request.o 00:04:29.102 CC lib/blob/zeroes.o 00:04:29.102 CC lib/init/json_config.o 00:04:29.102 CC lib/blob/blob_bs_dev.o 00:04:29.102 CC lib/accel/accel_sw.o 00:04:29.361 CC lib/init/subsystem.o 00:04:29.361 CC lib/init/subsystem_rpc.o 00:04:29.361 CC lib/init/rpc.o 00:04:29.361 CC lib/virtio/virtio.o 00:04:29.361 CC lib/virtio/virtio_vhost_user.o 00:04:29.361 CC lib/virtio/virtio_vfio_user.o 00:04:29.361 CC lib/virtio/virtio_pci.o 00:04:29.620 LIB libspdk_init.a 00:04:29.620 LIB libspdk_accel.a 00:04:29.620 SO libspdk_init.so.4.0 00:04:29.620 SO libspdk_accel.so.14.0 00:04:29.620 SYMLINK libspdk_init.so 00:04:29.620 SYMLINK libspdk_accel.so 00:04:29.878 CC lib/event/app.o 00:04:29.878 CC lib/event/reactor.o 00:04:29.878 CC lib/event/log_rpc.o 00:04:29.878 CC lib/event/scheduler_static.o 00:04:29.878 CC lib/event/app_rpc.o 00:04:29.878 CC lib/bdev/bdev.o 00:04:29.878 CC lib/bdev/bdev_rpc.o 00:04:29.878 LIB libspdk_nvme.a 00:04:29.878 LIB libspdk_virtio.a 00:04:29.878 CC lib/bdev/bdev_zone.o 00:04:29.878 SO libspdk_virtio.so.6.0 00:04:30.138 SYMLINK libspdk_virtio.so 00:04:30.138 CC lib/bdev/part.o 00:04:30.138 CC lib/bdev/scsi_nvme.o 00:04:30.138 SO libspdk_nvme.so.12.0 00:04:30.398 LIB libspdk_event.a 00:04:30.398 SYMLINK libspdk_nvme.so 00:04:30.398 SO libspdk_event.so.12.0 00:04:30.398 SYMLINK libspdk_event.so 00:04:31.775 
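[editor's note, not part of the captured run] Each SPDK component in the make output above emits the same trio of messages: LIB (the static archive), SO (the versioned shared object, since this run was configured --with-shared), and SYMLINK (the unversioned development link). Purely as an illustration of that layout, and not taken from SPDK's actual mk/ makefiles, a chain like libspdk_log seen earlier in the log is typically produced roughly as follows:

  # Hypothetical sketch of the LIB/SO/SYMLINK pattern; object files are assumed
  # to have been compiled with -fPIC, and the real rules live in SPDK's makefiles.
  ar rcs libspdk_log.a log.o log_flags.o log_deprecated.o         # LIB: static archive
  cc -shared -Wl,-soname,libspdk_log.so.6.1 -o libspdk_log.so.6.1 \
     log.o log_flags.o log_deprecated.o                           # SO: versioned shared object
  ln -sf libspdk_log.so.6.1 libspdk_log.so                        # SYMLINK: unversioned dev link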
LIB libspdk_blob.a 00:04:31.775 SO libspdk_blob.so.10.1 00:04:31.775 SYMLINK libspdk_blob.so 00:04:31.775 CC lib/lvol/lvol.o 00:04:31.775 CC lib/blobfs/blobfs.o 00:04:31.775 CC lib/blobfs/tree.o 00:04:32.712 LIB libspdk_bdev.a 00:04:32.712 LIB libspdk_blobfs.a 00:04:32.712 LIB libspdk_lvol.a 00:04:32.712 SO libspdk_bdev.so.14.0 00:04:32.712 SO libspdk_lvol.so.9.1 00:04:32.712 SO libspdk_blobfs.so.9.0 00:04:32.970 SYMLINK libspdk_bdev.so 00:04:32.970 SYMLINK libspdk_blobfs.so 00:04:32.970 SYMLINK libspdk_lvol.so 00:04:32.970 CC lib/ublk/ublk.o 00:04:32.970 CC lib/scsi/dev.o 00:04:32.970 CC lib/ublk/ublk_rpc.o 00:04:32.970 CC lib/scsi/lun.o 00:04:32.970 CC lib/scsi/port.o 00:04:32.970 CC lib/scsi/scsi.o 00:04:32.970 CC lib/nvmf/ctrlr.o 00:04:32.970 CC lib/scsi/scsi_bdev.o 00:04:32.970 CC lib/nbd/nbd.o 00:04:32.970 CC lib/ftl/ftl_core.o 00:04:33.228 CC lib/scsi/scsi_pr.o 00:04:33.228 CC lib/scsi/scsi_rpc.o 00:04:33.228 CC lib/ftl/ftl_init.o 00:04:33.228 CC lib/scsi/task.o 00:04:33.487 CC lib/ftl/ftl_layout.o 00:04:33.487 CC lib/ftl/ftl_debug.o 00:04:33.487 CC lib/nbd/nbd_rpc.o 00:04:33.487 CC lib/ftl/ftl_io.o 00:04:33.487 CC lib/ftl/ftl_sb.o 00:04:33.487 CC lib/ftl/ftl_l2p.o 00:04:33.487 LIB libspdk_scsi.a 00:04:33.487 CC lib/nvmf/ctrlr_discovery.o 00:04:33.763 LIB libspdk_nbd.a 00:04:33.763 SO libspdk_scsi.so.8.0 00:04:33.763 SO libspdk_nbd.so.6.0 00:04:33.763 SYMLINK libspdk_scsi.so 00:04:33.763 SYMLINK libspdk_nbd.so 00:04:33.763 CC lib/ftl/ftl_l2p_flat.o 00:04:33.763 CC lib/nvmf/ctrlr_bdev.o 00:04:33.763 CC lib/ftl/ftl_nv_cache.o 00:04:33.763 LIB libspdk_ublk.a 00:04:33.763 SO libspdk_ublk.so.2.0 00:04:33.763 CC lib/ftl/ftl_band.o 00:04:33.763 CC lib/ftl/ftl_band_ops.o 00:04:33.763 CC lib/nvmf/subsystem.o 00:04:33.763 SYMLINK libspdk_ublk.so 00:04:34.054 CC lib/iscsi/conn.o 00:04:34.054 CC lib/vhost/vhost.o 00:04:34.054 CC lib/nvmf/nvmf.o 00:04:34.054 CC lib/nvmf/nvmf_rpc.o 00:04:34.054 CC lib/iscsi/init_grp.o 00:04:34.314 CC lib/nvmf/transport.o 00:04:34.573 CC lib/ftl/ftl_writer.o 00:04:34.573 CC lib/nvmf/tcp.o 00:04:34.573 CC lib/iscsi/iscsi.o 00:04:34.573 CC lib/ftl/ftl_rq.o 00:04:34.832 CC lib/iscsi/md5.o 00:04:34.832 CC lib/nvmf/rdma.o 00:04:34.832 CC lib/vhost/vhost_rpc.o 00:04:34.832 CC lib/vhost/vhost_scsi.o 00:04:34.832 CC lib/ftl/ftl_reloc.o 00:04:34.832 CC lib/ftl/ftl_l2p_cache.o 00:04:34.832 CC lib/iscsi/param.o 00:04:35.091 CC lib/iscsi/portal_grp.o 00:04:35.091 CC lib/iscsi/tgt_node.o 00:04:35.350 CC lib/vhost/vhost_blk.o 00:04:35.350 CC lib/ftl/ftl_p2l.o 00:04:35.350 CC lib/iscsi/iscsi_subsystem.o 00:04:35.350 CC lib/vhost/rte_vhost_user.o 00:04:35.350 CC lib/iscsi/iscsi_rpc.o 00:04:35.609 CC lib/iscsi/task.o 00:04:35.609 CC lib/ftl/mngt/ftl_mngt.o 00:04:35.609 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:35.869 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:35.869 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:35.869 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:35.869 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:35.869 LIB libspdk_iscsi.a 00:04:35.869 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:35.869 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:35.869 SO libspdk_iscsi.so.7.0 00:04:36.128 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:36.128 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:36.128 SYMLINK libspdk_iscsi.so 00:04:36.128 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:36.128 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:36.128 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:36.128 CC lib/ftl/utils/ftl_conf.o 00:04:36.128 CC lib/ftl/utils/ftl_md.o 00:04:36.128 CC lib/ftl/utils/ftl_mempool.o 00:04:36.387 CC lib/ftl/utils/ftl_bitmap.o 
00:04:36.387 CC lib/ftl/utils/ftl_property.o 00:04:36.387 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:36.387 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:36.387 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:36.647 LIB libspdk_vhost.a 00:04:36.647 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:36.647 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:36.647 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:36.647 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:36.647 SO libspdk_vhost.so.7.1 00:04:36.647 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:36.647 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:36.647 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:36.647 CC lib/ftl/base/ftl_base_dev.o 00:04:36.647 CC lib/ftl/base/ftl_base_bdev.o 00:04:36.647 SYMLINK libspdk_vhost.so 00:04:36.647 CC lib/ftl/ftl_trace.o 00:04:36.905 LIB libspdk_nvmf.a 00:04:36.905 SO libspdk_nvmf.so.17.0 00:04:36.905 LIB libspdk_ftl.a 00:04:37.164 SYMLINK libspdk_nvmf.so 00:04:37.164 SO libspdk_ftl.so.8.0 00:04:37.423 SYMLINK libspdk_ftl.so 00:04:37.681 CC module/env_dpdk/env_dpdk_rpc.o 00:04:37.681 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:37.681 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:37.681 CC module/scheduler/gscheduler/gscheduler.o 00:04:37.681 CC module/blob/bdev/blob_bdev.o 00:04:37.681 CC module/sock/posix/posix.o 00:04:37.681 CC module/accel/ioat/accel_ioat.o 00:04:37.681 CC module/sock/uring/uring.o 00:04:37.681 CC module/accel/error/accel_error.o 00:04:37.940 CC module/accel/dsa/accel_dsa.o 00:04:37.940 LIB libspdk_env_dpdk_rpc.a 00:04:37.940 SO libspdk_env_dpdk_rpc.so.5.0 00:04:37.940 LIB libspdk_scheduler_gscheduler.a 00:04:37.940 CC module/accel/ioat/accel_ioat_rpc.o 00:04:37.940 LIB libspdk_scheduler_dynamic.a 00:04:37.940 SO libspdk_scheduler_gscheduler.so.3.0 00:04:37.940 SO libspdk_scheduler_dynamic.so.3.0 00:04:37.941 LIB libspdk_scheduler_dpdk_governor.a 00:04:37.941 SYMLINK libspdk_env_dpdk_rpc.so 00:04:37.941 CC module/accel/dsa/accel_dsa_rpc.o 00:04:37.941 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:38.198 SYMLINK libspdk_scheduler_dynamic.so 00:04:38.198 CC module/accel/error/accel_error_rpc.o 00:04:38.198 SYMLINK libspdk_scheduler_gscheduler.so 00:04:38.198 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:38.198 LIB libspdk_accel_ioat.a 00:04:38.198 LIB libspdk_blob_bdev.a 00:04:38.198 SO libspdk_accel_ioat.so.5.0 00:04:38.198 SO libspdk_blob_bdev.so.10.1 00:04:38.198 LIB libspdk_accel_dsa.a 00:04:38.198 CC module/accel/iaa/accel_iaa.o 00:04:38.198 CC module/accel/iaa/accel_iaa_rpc.o 00:04:38.198 SYMLINK libspdk_accel_ioat.so 00:04:38.198 SYMLINK libspdk_blob_bdev.so 00:04:38.198 SO libspdk_accel_dsa.so.4.0 00:04:38.198 LIB libspdk_accel_error.a 00:04:38.198 SYMLINK libspdk_accel_dsa.so 00:04:38.456 SO libspdk_accel_error.so.1.0 00:04:38.456 SYMLINK libspdk_accel_error.so 00:04:38.456 LIB libspdk_accel_iaa.a 00:04:38.456 CC module/bdev/lvol/vbdev_lvol.o 00:04:38.456 CC module/blobfs/bdev/blobfs_bdev.o 00:04:38.456 CC module/bdev/error/vbdev_error.o 00:04:38.456 CC module/bdev/delay/vbdev_delay.o 00:04:38.456 CC module/bdev/gpt/gpt.o 00:04:38.456 SO libspdk_accel_iaa.so.2.0 00:04:38.456 CC module/bdev/malloc/bdev_malloc.o 00:04:38.456 SYMLINK libspdk_accel_iaa.so 00:04:38.456 CC module/bdev/error/vbdev_error_rpc.o 00:04:38.456 CC module/bdev/null/bdev_null.o 00:04:38.456 LIB libspdk_sock_uring.a 00:04:38.714 SO libspdk_sock_uring.so.4.0 00:04:38.714 CC module/bdev/gpt/vbdev_gpt.o 00:04:38.714 LIB libspdk_sock_posix.a 00:04:38.714 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:38.714 SYMLINK libspdk_sock_uring.so 
00:04:38.714 SO libspdk_sock_posix.so.5.0 00:04:38.714 LIB libspdk_bdev_error.a 00:04:38.714 SO libspdk_bdev_error.so.5.0 00:04:38.714 CC module/bdev/null/bdev_null_rpc.o 00:04:38.971 CC module/bdev/nvme/bdev_nvme.o 00:04:38.971 SYMLINK libspdk_sock_posix.so 00:04:38.971 LIB libspdk_blobfs_bdev.a 00:04:38.971 CC module/bdev/passthru/vbdev_passthru.o 00:04:38.971 LIB libspdk_bdev_gpt.a 00:04:38.971 SO libspdk_blobfs_bdev.so.5.0 00:04:38.971 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:38.971 SYMLINK libspdk_bdev_error.so 00:04:38.971 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:38.971 SO libspdk_bdev_gpt.so.5.0 00:04:38.971 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:38.971 CC module/bdev/raid/bdev_raid.o 00:04:38.971 SYMLINK libspdk_blobfs_bdev.so 00:04:38.971 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:38.971 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:38.971 LIB libspdk_bdev_null.a 00:04:38.971 SYMLINK libspdk_bdev_gpt.so 00:04:38.971 SO libspdk_bdev_null.so.5.0 00:04:39.229 LIB libspdk_bdev_malloc.a 00:04:39.229 SYMLINK libspdk_bdev_null.so 00:04:39.229 CC module/bdev/split/vbdev_split.o 00:04:39.229 SO libspdk_bdev_malloc.so.5.0 00:04:39.229 LIB libspdk_bdev_delay.a 00:04:39.229 SO libspdk_bdev_delay.so.5.0 00:04:39.229 SYMLINK libspdk_bdev_malloc.so 00:04:39.229 LIB libspdk_bdev_passthru.a 00:04:39.229 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:39.229 SO libspdk_bdev_passthru.so.5.0 00:04:39.229 LIB libspdk_bdev_lvol.a 00:04:39.229 SO libspdk_bdev_lvol.so.5.0 00:04:39.229 SYMLINK libspdk_bdev_delay.so 00:04:39.229 CC module/bdev/split/vbdev_split_rpc.o 00:04:39.229 CC module/bdev/uring/bdev_uring.o 00:04:39.229 CC module/bdev/aio/bdev_aio.o 00:04:39.229 SYMLINK libspdk_bdev_passthru.so 00:04:39.486 SYMLINK libspdk_bdev_lvol.so 00:04:39.486 CC module/bdev/ftl/bdev_ftl.o 00:04:39.486 CC module/bdev/iscsi/bdev_iscsi.o 00:04:39.486 LIB libspdk_bdev_split.a 00:04:39.486 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:39.486 SO libspdk_bdev_split.so.5.0 00:04:39.486 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:39.486 SYMLINK libspdk_bdev_split.so 00:04:39.486 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:39.745 CC module/bdev/uring/bdev_uring_rpc.o 00:04:39.745 CC module/bdev/aio/bdev_aio_rpc.o 00:04:39.745 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:39.745 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:39.745 LIB libspdk_bdev_zone_block.a 00:04:39.745 LIB libspdk_bdev_uring.a 00:04:39.745 SO libspdk_bdev_zone_block.so.5.0 00:04:39.745 SO libspdk_bdev_uring.so.5.0 00:04:39.745 LIB libspdk_bdev_aio.a 00:04:39.745 LIB libspdk_bdev_ftl.a 00:04:40.003 SYMLINK libspdk_bdev_zone_block.so 00:04:40.003 CC module/bdev/raid/bdev_raid_rpc.o 00:04:40.003 CC module/bdev/nvme/nvme_rpc.o 00:04:40.003 SO libspdk_bdev_aio.so.5.0 00:04:40.003 SYMLINK libspdk_bdev_uring.so 00:04:40.003 CC module/bdev/nvme/bdev_mdns_client.o 00:04:40.003 SO libspdk_bdev_ftl.so.5.0 00:04:40.003 LIB libspdk_bdev_iscsi.a 00:04:40.003 SYMLINK libspdk_bdev_aio.so 00:04:40.003 SYMLINK libspdk_bdev_ftl.so 00:04:40.003 CC module/bdev/nvme/vbdev_opal.o 00:04:40.003 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:40.003 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:40.003 SO libspdk_bdev_iscsi.so.5.0 00:04:40.003 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:40.003 CC module/bdev/raid/bdev_raid_sb.o 00:04:40.003 SYMLINK libspdk_bdev_iscsi.so 00:04:40.003 CC module/bdev/raid/raid0.o 00:04:40.003 CC module/bdev/raid/raid1.o 00:04:40.003 CC module/bdev/raid/concat.o 00:04:40.261 LIB libspdk_bdev_virtio.a 00:04:40.261 SO 
libspdk_bdev_virtio.so.5.0 00:04:40.261 LIB libspdk_bdev_raid.a 00:04:40.261 SYMLINK libspdk_bdev_virtio.so 00:04:40.525 SO libspdk_bdev_raid.so.5.0 00:04:40.525 SYMLINK libspdk_bdev_raid.so 00:04:41.124 LIB libspdk_bdev_nvme.a 00:04:41.124 SO libspdk_bdev_nvme.so.6.0 00:04:41.383 SYMLINK libspdk_bdev_nvme.so 00:04:41.642 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:41.642 CC module/event/subsystems/sock/sock.o 00:04:41.642 CC module/event/subsystems/iobuf/iobuf.o 00:04:41.642 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:41.642 CC module/event/subsystems/vmd/vmd.o 00:04:41.642 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:41.642 CC module/event/subsystems/scheduler/scheduler.o 00:04:41.901 LIB libspdk_event_vhost_blk.a 00:04:41.901 LIB libspdk_event_vmd.a 00:04:41.901 LIB libspdk_event_sock.a 00:04:41.901 LIB libspdk_event_scheduler.a 00:04:41.901 SO libspdk_event_vhost_blk.so.2.0 00:04:41.901 SO libspdk_event_vmd.so.5.0 00:04:41.901 SO libspdk_event_sock.so.4.0 00:04:41.901 LIB libspdk_event_iobuf.a 00:04:41.901 SO libspdk_event_scheduler.so.3.0 00:04:41.901 SO libspdk_event_iobuf.so.2.0 00:04:41.901 SYMLINK libspdk_event_vhost_blk.so 00:04:41.901 SYMLINK libspdk_event_vmd.so 00:04:41.901 SYMLINK libspdk_event_sock.so 00:04:41.901 SYMLINK libspdk_event_scheduler.so 00:04:41.901 SYMLINK libspdk_event_iobuf.so 00:04:42.160 CC module/event/subsystems/accel/accel.o 00:04:42.419 LIB libspdk_event_accel.a 00:04:42.419 SO libspdk_event_accel.so.5.0 00:04:42.419 SYMLINK libspdk_event_accel.so 00:04:42.678 CC module/event/subsystems/bdev/bdev.o 00:04:42.936 LIB libspdk_event_bdev.a 00:04:42.936 SO libspdk_event_bdev.so.5.0 00:04:42.936 SYMLINK libspdk_event_bdev.so 00:04:43.194 CC module/event/subsystems/ublk/ublk.o 00:04:43.194 CC module/event/subsystems/nbd/nbd.o 00:04:43.194 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:43.194 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:43.194 CC module/event/subsystems/scsi/scsi.o 00:04:43.194 LIB libspdk_event_nbd.a 00:04:43.194 LIB libspdk_event_ublk.a 00:04:43.194 SO libspdk_event_nbd.so.5.0 00:04:43.194 LIB libspdk_event_scsi.a 00:04:43.194 SO libspdk_event_ublk.so.2.0 00:04:43.194 SO libspdk_event_scsi.so.5.0 00:04:43.453 SYMLINK libspdk_event_nbd.so 00:04:43.453 SYMLINK libspdk_event_scsi.so 00:04:43.453 SYMLINK libspdk_event_ublk.so 00:04:43.453 LIB libspdk_event_nvmf.a 00:04:43.453 SO libspdk_event_nvmf.so.5.0 00:04:43.453 SYMLINK libspdk_event_nvmf.so 00:04:43.453 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:43.453 CC module/event/subsystems/iscsi/iscsi.o 00:04:43.715 LIB libspdk_event_vhost_scsi.a 00:04:43.715 LIB libspdk_event_iscsi.a 00:04:43.715 SO libspdk_event_vhost_scsi.so.2.0 00:04:43.715 SO libspdk_event_iscsi.so.5.0 00:04:43.715 SYMLINK libspdk_event_vhost_scsi.so 00:04:43.715 SYMLINK libspdk_event_iscsi.so 00:04:43.974 SO libspdk.so.5.0 00:04:43.974 SYMLINK libspdk.so 00:04:44.233 CXX app/trace/trace.o 00:04:44.233 CC examples/ioat/perf/perf.o 00:04:44.233 CC examples/sock/hello_world/hello_sock.o 00:04:44.233 CC examples/accel/perf/accel_perf.o 00:04:44.233 CC examples/nvme/hello_world/hello_world.o 00:04:44.233 CC examples/bdev/hello_world/hello_bdev.o 00:04:44.233 CC test/bdev/bdevio/bdevio.o 00:04:44.233 CC examples/blob/hello_world/hello_blob.o 00:04:44.233 CC test/accel/dif/dif.o 00:04:44.233 CC test/app/bdev_svc/bdev_svc.o 00:04:44.491 LINK hello_world 00:04:44.491 LINK hello_bdev 00:04:44.491 LINK hello_sock 00:04:44.491 LINK bdev_svc 00:04:44.491 LINK hello_blob 00:04:44.491 LINK ioat_perf 
00:04:44.491 LINK spdk_trace 00:04:44.749 LINK accel_perf 00:04:44.749 CC examples/nvme/reconnect/reconnect.o 00:04:44.749 LINK bdevio 00:04:44.749 LINK dif 00:04:44.749 CC examples/ioat/verify/verify.o 00:04:44.749 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:44.749 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:44.749 CC examples/bdev/bdevperf/bdevperf.o 00:04:44.749 CC examples/blob/cli/blobcli.o 00:04:45.009 CC app/trace_record/trace_record.o 00:04:45.009 CC app/nvmf_tgt/nvmf_main.o 00:04:45.009 LINK verify 00:04:45.009 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:45.009 LINK reconnect 00:04:45.009 CC examples/nvme/arbitration/arbitration.o 00:04:45.267 LINK spdk_trace_record 00:04:45.267 LINK nvme_fuzz 00:04:45.267 LINK nvmf_tgt 00:04:45.267 CC test/app/histogram_perf/histogram_perf.o 00:04:45.267 CC test/app/jsoncat/jsoncat.o 00:04:45.267 LINK blobcli 00:04:45.267 CC test/app/stub/stub.o 00:04:45.267 LINK histogram_perf 00:04:45.525 LINK arbitration 00:04:45.525 LINK jsoncat 00:04:45.525 CC examples/vmd/lsvmd/lsvmd.o 00:04:45.525 LINK nvme_manage 00:04:45.525 CC app/iscsi_tgt/iscsi_tgt.o 00:04:45.525 LINK stub 00:04:45.525 LINK bdevperf 00:04:45.525 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:45.784 LINK lsvmd 00:04:45.784 CC examples/nvmf/nvmf/nvmf.o 00:04:45.784 CC examples/nvme/hotplug/hotplug.o 00:04:45.784 CC examples/util/zipf/zipf.o 00:04:45.784 LINK iscsi_tgt 00:04:45.784 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:45.784 CC examples/thread/thread/thread_ex.o 00:04:45.784 CC examples/idxd/perf/perf.o 00:04:45.784 CC examples/vmd/led/led.o 00:04:46.044 CC app/spdk_tgt/spdk_tgt.o 00:04:46.044 LINK zipf 00:04:46.044 LINK hotplug 00:04:46.044 LINK nvmf 00:04:46.044 CC app/spdk_lspci/spdk_lspci.o 00:04:46.044 LINK led 00:04:46.044 LINK thread 00:04:46.044 LINK spdk_tgt 00:04:46.302 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:46.302 LINK vhost_fuzz 00:04:46.302 LINK spdk_lspci 00:04:46.302 LINK idxd_perf 00:04:46.302 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.302 CC examples/nvme/abort/abort.o 00:04:46.302 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:46.302 LINK interrupt_tgt 00:04:46.302 CC app/spdk_nvme_perf/perf.o 00:04:46.302 CC app/spdk_nvme_discover/discovery_aer.o 00:04:46.302 CC app/spdk_nvme_identify/identify.o 00:04:46.560 TEST_HEADER include/spdk/accel.h 00:04:46.560 TEST_HEADER include/spdk/accel_module.h 00:04:46.560 TEST_HEADER include/spdk/assert.h 00:04:46.560 TEST_HEADER include/spdk/barrier.h 00:04:46.560 TEST_HEADER include/spdk/base64.h 00:04:46.560 TEST_HEADER include/spdk/bdev.h 00:04:46.560 LINK cmb_copy 00:04:46.560 TEST_HEADER include/spdk/bdev_module.h 00:04:46.560 TEST_HEADER include/spdk/bdev_zone.h 00:04:46.560 TEST_HEADER include/spdk/bit_array.h 00:04:46.560 TEST_HEADER include/spdk/bit_pool.h 00:04:46.560 TEST_HEADER include/spdk/blob_bdev.h 00:04:46.560 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:46.561 TEST_HEADER include/spdk/blobfs.h 00:04:46.561 TEST_HEADER include/spdk/blob.h 00:04:46.561 TEST_HEADER include/spdk/conf.h 00:04:46.561 TEST_HEADER include/spdk/config.h 00:04:46.561 TEST_HEADER include/spdk/cpuset.h 00:04:46.561 TEST_HEADER include/spdk/crc16.h 00:04:46.561 TEST_HEADER include/spdk/crc32.h 00:04:46.561 TEST_HEADER include/spdk/crc64.h 00:04:46.561 TEST_HEADER include/spdk/dif.h 00:04:46.561 TEST_HEADER include/spdk/dma.h 00:04:46.561 TEST_HEADER include/spdk/endian.h 00:04:46.561 TEST_HEADER include/spdk/env_dpdk.h 00:04:46.561 TEST_HEADER include/spdk/env.h 00:04:46.561 TEST_HEADER 
include/spdk/event.h 00:04:46.561 TEST_HEADER include/spdk/fd_group.h 00:04:46.561 TEST_HEADER include/spdk/fd.h 00:04:46.561 TEST_HEADER include/spdk/file.h 00:04:46.561 TEST_HEADER include/spdk/ftl.h 00:04:46.561 TEST_HEADER include/spdk/gpt_spec.h 00:04:46.561 TEST_HEADER include/spdk/hexlify.h 00:04:46.561 TEST_HEADER include/spdk/histogram_data.h 00:04:46.561 TEST_HEADER include/spdk/idxd.h 00:04:46.561 TEST_HEADER include/spdk/idxd_spec.h 00:04:46.561 TEST_HEADER include/spdk/init.h 00:04:46.561 TEST_HEADER include/spdk/ioat.h 00:04:46.561 TEST_HEADER include/spdk/ioat_spec.h 00:04:46.561 TEST_HEADER include/spdk/iscsi_spec.h 00:04:46.561 LINK iscsi_fuzz 00:04:46.561 TEST_HEADER include/spdk/json.h 00:04:46.561 TEST_HEADER include/spdk/jsonrpc.h 00:04:46.561 TEST_HEADER include/spdk/likely.h 00:04:46.561 TEST_HEADER include/spdk/log.h 00:04:46.561 TEST_HEADER include/spdk/lvol.h 00:04:46.561 TEST_HEADER include/spdk/memory.h 00:04:46.561 TEST_HEADER include/spdk/mmio.h 00:04:46.561 CC test/blobfs/mkfs/mkfs.o 00:04:46.561 TEST_HEADER include/spdk/nbd.h 00:04:46.561 TEST_HEADER include/spdk/notify.h 00:04:46.561 TEST_HEADER include/spdk/nvme.h 00:04:46.561 LINK pmr_persistence 00:04:46.561 TEST_HEADER include/spdk/nvme_intel.h 00:04:46.561 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:46.561 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:46.561 TEST_HEADER include/spdk/nvme_spec.h 00:04:46.561 TEST_HEADER include/spdk/nvme_zns.h 00:04:46.561 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:46.561 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:46.561 TEST_HEADER include/spdk/nvmf.h 00:04:46.561 TEST_HEADER include/spdk/nvmf_spec.h 00:04:46.561 TEST_HEADER include/spdk/nvmf_transport.h 00:04:46.561 TEST_HEADER include/spdk/opal.h 00:04:46.561 TEST_HEADER include/spdk/opal_spec.h 00:04:46.561 TEST_HEADER include/spdk/pci_ids.h 00:04:46.561 TEST_HEADER include/spdk/pipe.h 00:04:46.561 TEST_HEADER include/spdk/queue.h 00:04:46.561 TEST_HEADER include/spdk/reduce.h 00:04:46.561 TEST_HEADER include/spdk/rpc.h 00:04:46.561 TEST_HEADER include/spdk/scheduler.h 00:04:46.561 TEST_HEADER include/spdk/scsi.h 00:04:46.561 TEST_HEADER include/spdk/scsi_spec.h 00:04:46.561 TEST_HEADER include/spdk/sock.h 00:04:46.561 TEST_HEADER include/spdk/stdinc.h 00:04:46.561 TEST_HEADER include/spdk/string.h 00:04:46.561 TEST_HEADER include/spdk/thread.h 00:04:46.561 TEST_HEADER include/spdk/trace.h 00:04:46.561 TEST_HEADER include/spdk/trace_parser.h 00:04:46.561 TEST_HEADER include/spdk/tree.h 00:04:46.561 TEST_HEADER include/spdk/ublk.h 00:04:46.561 TEST_HEADER include/spdk/util.h 00:04:46.561 TEST_HEADER include/spdk/uuid.h 00:04:46.561 TEST_HEADER include/spdk/version.h 00:04:46.561 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:46.561 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:46.561 TEST_HEADER include/spdk/vhost.h 00:04:46.561 TEST_HEADER include/spdk/vmd.h 00:04:46.561 TEST_HEADER include/spdk/xor.h 00:04:46.561 TEST_HEADER include/spdk/zipf.h 00:04:46.561 CC app/spdk_top/spdk_top.o 00:04:46.561 LINK spdk_nvme_discover 00:04:46.561 CXX test/cpp_headers/accel.o 00:04:46.819 CXX test/cpp_headers/accel_module.o 00:04:46.819 LINK abort 00:04:46.819 LINK mkfs 00:04:46.819 CC test/dma/test_dma/test_dma.o 00:04:46.819 CXX test/cpp_headers/assert.o 00:04:46.819 CC test/event/event_perf/event_perf.o 00:04:47.090 CC test/env/mem_callbacks/mem_callbacks.o 00:04:47.090 CC test/lvol/esnap/esnap.o 00:04:47.090 CC test/nvme/aer/aer.o 00:04:47.090 CC app/vhost/vhost.o 00:04:47.090 CXX test/cpp_headers/barrier.o 
00:04:47.090 LINK event_perf 00:04:47.364 LINK test_dma 00:04:47.364 LINK spdk_nvme_identify 00:04:47.364 CXX test/cpp_headers/base64.o 00:04:47.364 LINK vhost 00:04:47.364 LINK spdk_nvme_perf 00:04:47.364 CC test/event/reactor/reactor.o 00:04:47.364 LINK aer 00:04:47.364 CXX test/cpp_headers/bdev.o 00:04:47.364 CXX test/cpp_headers/bdev_module.o 00:04:47.364 CXX test/cpp_headers/bdev_zone.o 00:04:47.364 CXX test/cpp_headers/bit_array.o 00:04:47.623 CC test/rpc_client/rpc_client_test.o 00:04:47.623 LINK reactor 00:04:47.623 LINK spdk_top 00:04:47.623 LINK mem_callbacks 00:04:47.623 CC test/nvme/reset/reset.o 00:04:47.623 CXX test/cpp_headers/bit_pool.o 00:04:47.623 CC test/nvme/sgl/sgl.o 00:04:47.623 LINK rpc_client_test 00:04:47.623 CC test/nvme/e2edp/nvme_dp.o 00:04:47.623 CC test/event/reactor_perf/reactor_perf.o 00:04:47.623 CC test/nvme/overhead/overhead.o 00:04:47.881 CC test/env/vtophys/vtophys.o 00:04:47.881 CC app/spdk_dd/spdk_dd.o 00:04:47.881 CXX test/cpp_headers/blob_bdev.o 00:04:47.881 LINK reactor_perf 00:04:47.881 LINK reset 00:04:47.881 CXX test/cpp_headers/blobfs_bdev.o 00:04:47.881 LINK vtophys 00:04:47.881 LINK sgl 00:04:47.881 LINK nvme_dp 00:04:48.138 LINK overhead 00:04:48.138 CXX test/cpp_headers/blobfs.o 00:04:48.138 CC test/event/app_repeat/app_repeat.o 00:04:48.138 CC test/nvme/err_injection/err_injection.o 00:04:48.138 CXX test/cpp_headers/blob.o 00:04:48.138 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:48.138 CC test/event/scheduler/scheduler.o 00:04:48.138 LINK spdk_dd 00:04:48.138 CC test/nvme/startup/startup.o 00:04:48.395 CC test/env/memory/memory_ut.o 00:04:48.395 LINK app_repeat 00:04:48.395 CXX test/cpp_headers/conf.o 00:04:48.395 LINK env_dpdk_post_init 00:04:48.395 LINK err_injection 00:04:48.395 CC test/env/pci/pci_ut.o 00:04:48.395 LINK scheduler 00:04:48.395 LINK startup 00:04:48.395 CXX test/cpp_headers/config.o 00:04:48.653 CXX test/cpp_headers/cpuset.o 00:04:48.653 CC app/fio/nvme/fio_plugin.o 00:04:48.653 CC test/nvme/reserve/reserve.o 00:04:48.653 CC test/nvme/simple_copy/simple_copy.o 00:04:48.653 CC test/thread/poller_perf/poller_perf.o 00:04:48.653 CC test/nvme/connect_stress/connect_stress.o 00:04:48.653 CXX test/cpp_headers/crc16.o 00:04:48.653 CC test/nvme/boot_partition/boot_partition.o 00:04:48.653 LINK pci_ut 00:04:48.912 LINK poller_perf 00:04:48.912 LINK reserve 00:04:48.912 LINK simple_copy 00:04:48.912 CXX test/cpp_headers/crc32.o 00:04:48.912 LINK connect_stress 00:04:48.912 LINK boot_partition 00:04:48.912 CXX test/cpp_headers/crc64.o 00:04:48.912 CC test/nvme/compliance/nvme_compliance.o 00:04:49.170 CXX test/cpp_headers/dif.o 00:04:49.170 LINK spdk_nvme 00:04:49.170 CC test/nvme/fused_ordering/fused_ordering.o 00:04:49.170 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:49.170 CC test/nvme/fdp/fdp.o 00:04:49.170 CXX test/cpp_headers/dma.o 00:04:49.170 CC test/nvme/cuse/cuse.o 00:04:49.170 CXX test/cpp_headers/endian.o 00:04:49.170 LINK memory_ut 00:04:49.170 CC app/fio/bdev/fio_plugin.o 00:04:49.429 LINK doorbell_aers 00:04:49.429 CXX test/cpp_headers/env_dpdk.o 00:04:49.429 LINK fused_ordering 00:04:49.429 LINK nvme_compliance 00:04:49.429 CXX test/cpp_headers/env.o 00:04:49.429 LINK fdp 00:04:49.429 CXX test/cpp_headers/event.o 00:04:49.429 CXX test/cpp_headers/fd_group.o 00:04:49.429 CXX test/cpp_headers/fd.o 00:04:49.429 CXX test/cpp_headers/file.o 00:04:49.429 CXX test/cpp_headers/ftl.o 00:04:49.688 CXX test/cpp_headers/gpt_spec.o 00:04:49.688 CXX test/cpp_headers/hexlify.o 00:04:49.688 CXX 
test/cpp_headers/histogram_data.o 00:04:49.688 CXX test/cpp_headers/idxd.o 00:04:49.688 CXX test/cpp_headers/idxd_spec.o 00:04:49.688 CXX test/cpp_headers/init.o 00:04:49.688 CXX test/cpp_headers/ioat.o 00:04:49.688 CXX test/cpp_headers/ioat_spec.o 00:04:49.688 CXX test/cpp_headers/iscsi_spec.o 00:04:49.688 CXX test/cpp_headers/json.o 00:04:49.946 LINK spdk_bdev 00:04:49.946 CXX test/cpp_headers/jsonrpc.o 00:04:49.946 CXX test/cpp_headers/likely.o 00:04:49.946 CXX test/cpp_headers/log.o 00:04:49.946 CXX test/cpp_headers/lvol.o 00:04:49.946 CXX test/cpp_headers/memory.o 00:04:49.946 CXX test/cpp_headers/mmio.o 00:04:49.946 CXX test/cpp_headers/nbd.o 00:04:49.946 CXX test/cpp_headers/notify.o 00:04:49.946 CXX test/cpp_headers/nvme.o 00:04:49.946 CXX test/cpp_headers/nvme_intel.o 00:04:49.946 CXX test/cpp_headers/nvme_ocssd.o 00:04:50.204 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:50.204 CXX test/cpp_headers/nvme_spec.o 00:04:50.205 CXX test/cpp_headers/nvme_zns.o 00:04:50.205 CXX test/cpp_headers/nvmf_cmd.o 00:04:50.205 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:50.205 LINK cuse 00:04:50.205 CXX test/cpp_headers/nvmf.o 00:04:50.205 CXX test/cpp_headers/nvmf_spec.o 00:04:50.205 CXX test/cpp_headers/nvmf_transport.o 00:04:50.205 CXX test/cpp_headers/opal.o 00:04:50.205 CXX test/cpp_headers/opal_spec.o 00:04:50.205 CXX test/cpp_headers/pci_ids.o 00:04:50.462 CXX test/cpp_headers/pipe.o 00:04:50.462 CXX test/cpp_headers/queue.o 00:04:50.462 CXX test/cpp_headers/reduce.o 00:04:50.462 CXX test/cpp_headers/rpc.o 00:04:50.462 CXX test/cpp_headers/scheduler.o 00:04:50.462 CXX test/cpp_headers/scsi.o 00:04:50.462 CXX test/cpp_headers/scsi_spec.o 00:04:50.462 CXX test/cpp_headers/sock.o 00:04:50.462 CXX test/cpp_headers/stdinc.o 00:04:50.463 CXX test/cpp_headers/string.o 00:04:50.463 CXX test/cpp_headers/thread.o 00:04:50.721 CXX test/cpp_headers/trace.o 00:04:50.721 CXX test/cpp_headers/trace_parser.o 00:04:50.721 CXX test/cpp_headers/tree.o 00:04:50.721 CXX test/cpp_headers/ublk.o 00:04:50.721 CXX test/cpp_headers/util.o 00:04:50.721 CXX test/cpp_headers/uuid.o 00:04:50.721 CXX test/cpp_headers/version.o 00:04:50.721 CXX test/cpp_headers/vfio_user_pci.o 00:04:50.721 CXX test/cpp_headers/vfio_user_spec.o 00:04:50.721 CXX test/cpp_headers/vhost.o 00:04:50.721 CXX test/cpp_headers/vmd.o 00:04:50.721 CXX test/cpp_headers/xor.o 00:04:50.721 CXX test/cpp_headers/zipf.o 00:04:52.095 LINK esnap 00:04:52.355 ************************************ 00:04:52.355 END TEST make 00:04:52.355 ************************************ 00:04:52.355 00:04:52.355 real 0m52.574s 00:04:52.355 user 4m53.483s 00:04:52.355 sys 1m5.301s 00:04:52.355 07:14:14 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:04:52.355 07:14:14 -- common/autotest_common.sh@10 -- $ set +x 00:04:52.355 07:14:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:52.355 07:14:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:52.355 07:14:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:52.681 07:14:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:52.681 07:14:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:52.681 07:14:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:52.681 07:14:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:52.681 07:14:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:52.681 07:14:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:52.681 07:14:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.681 07:14:14 -- scripts/common.sh@336 -- # read -ra ver2 
00:04:52.681 07:14:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:52.681 07:14:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:52.681 07:14:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:52.681 07:14:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:52.681 07:14:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:52.681 07:14:14 -- scripts/common.sh@344 -- # : 1 00:04:52.681 07:14:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:52.681 07:14:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.681 07:14:14 -- scripts/common.sh@364 -- # decimal 1 00:04:52.681 07:14:14 -- scripts/common.sh@352 -- # local d=1 00:04:52.681 07:14:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.681 07:14:14 -- scripts/common.sh@354 -- # echo 1 00:04:52.681 07:14:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:52.681 07:14:14 -- scripts/common.sh@365 -- # decimal 2 00:04:52.681 07:14:14 -- scripts/common.sh@352 -- # local d=2 00:04:52.681 07:14:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.682 07:14:14 -- scripts/common.sh@354 -- # echo 2 00:04:52.682 07:14:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:52.682 07:14:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:52.682 07:14:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:52.682 07:14:14 -- scripts/common.sh@367 -- # return 0 00:04:52.682 07:14:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.682 07:14:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.682 --rc genhtml_branch_coverage=1 00:04:52.682 --rc genhtml_function_coverage=1 00:04:52.682 --rc genhtml_legend=1 00:04:52.682 --rc geninfo_all_blocks=1 00:04:52.682 --rc geninfo_unexecuted_blocks=1 00:04:52.682 00:04:52.682 ' 00:04:52.682 07:14:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.682 --rc genhtml_branch_coverage=1 00:04:52.682 --rc genhtml_function_coverage=1 00:04:52.682 --rc genhtml_legend=1 00:04:52.682 --rc geninfo_all_blocks=1 00:04:52.682 --rc geninfo_unexecuted_blocks=1 00:04:52.682 00:04:52.682 ' 00:04:52.682 07:14:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.682 --rc genhtml_branch_coverage=1 00:04:52.682 --rc genhtml_function_coverage=1 00:04:52.682 --rc genhtml_legend=1 00:04:52.682 --rc geninfo_all_blocks=1 00:04:52.682 --rc geninfo_unexecuted_blocks=1 00:04:52.682 00:04:52.682 ' 00:04:52.682 07:14:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.682 --rc genhtml_branch_coverage=1 00:04:52.682 --rc genhtml_function_coverage=1 00:04:52.682 --rc genhtml_legend=1 00:04:52.682 --rc geninfo_all_blocks=1 00:04:52.682 --rc geninfo_unexecuted_blocks=1 00:04:52.682 00:04:52.682 ' 00:04:52.682 07:14:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.682 07:14:14 -- nvmf/common.sh@7 -- # uname -s 00:04:52.682 07:14:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.682 07:14:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.682 07:14:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.682 07:14:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.682 07:14:14 -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:04:52.682 07:14:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.682 07:14:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.682 07:14:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.682 07:14:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.682 07:14:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.682 07:14:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:04:52.682 07:14:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:04:52.682 07:14:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.682 07:14:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.682 07:14:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:52.682 07:14:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.682 07:14:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.682 07:14:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.682 07:14:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.682 07:14:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.682 07:14:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.682 07:14:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.682 07:14:14 -- paths/export.sh@5 -- # export PATH 00:04:52.682 07:14:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.682 07:14:14 -- nvmf/common.sh@46 -- # : 0 00:04:52.682 07:14:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:52.682 07:14:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:52.682 07:14:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:52.682 07:14:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.682 07:14:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.682 07:14:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:52.682 07:14:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:52.682 07:14:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:52.682 07:14:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:52.682 07:14:14 -- spdk/autotest.sh@32 -- # uname -s 00:04:52.682 07:14:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:52.682 07:14:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:52.682 07:14:14 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.682 07:14:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:52.682 07:14:14 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:52.682 07:14:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:52.682 07:14:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:52.682 07:14:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:52.682 07:14:14 -- spdk/autotest.sh@48 -- # udevadm_pid=60046 00:04:52.682 07:14:14 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:52.682 07:14:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:52.682 07:14:14 -- spdk/autotest.sh@54 -- # echo 60049 00:04:52.682 07:14:14 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:52.682 07:14:14 -- spdk/autotest.sh@56 -- # echo 60051 00:04:52.682 07:14:14 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:52.682 07:14:14 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:52.682 07:14:14 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:52.682 07:14:14 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:52.682 07:14:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:52.682 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:52.682 07:14:14 -- spdk/autotest.sh@70 -- # create_test_list 00:04:52.682 07:14:14 -- common/autotest_common.sh@746 -- # xtrace_disable 00:04:52.682 07:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:52.682 07:14:14 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:52.682 07:14:14 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:52.682 07:14:14 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:52.682 07:14:14 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:52.682 07:14:14 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:52.682 07:14:14 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:52.682 07:14:14 -- common/autotest_common.sh@1450 -- # uname 00:04:52.682 07:14:14 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:04:52.682 07:14:14 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:52.682 07:14:14 -- common/autotest_common.sh@1470 -- # uname 00:04:52.682 07:14:14 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:04:52.682 07:14:14 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:04:52.682 07:14:14 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:52.977 lcov: LCOV version 1.15 00:04:52.977 07:14:14 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:01.090 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:01.090 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:01.090 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:01.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:01.090 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:01.090 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:27.623 07:14:45 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:27.623 07:14:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:27.623 07:14:45 -- common/autotest_common.sh@10 -- # set +x 00:05:27.623 07:14:45 -- spdk/autotest.sh@89 -- # rm -f 00:05:27.623 07:14:45 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:27.623 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:27.623 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:27.623 07:14:46 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:27.623 07:14:46 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:27.623 07:14:46 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:27.623 07:14:46 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:27.623 07:14:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.623 07:14:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:27.623 07:14:46 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:27.623 07:14:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:27.623 07:14:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.623 07:14:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.623 07:14:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:27.623 07:14:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:27.623 07:14:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:27.623 07:14:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.623 07:14:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.623 07:14:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:27.623 07:14:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:27.623 07:14:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:27.623 07:14:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.623 07:14:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.623 07:14:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:27.623 07:14:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:27.623 07:14:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:27.623 07:14:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.623 07:14:46 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:27.623 07:14:46 -- spdk/autotest.sh@108 -- # grep -v p 00:05:27.623 07:14:46 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:27.623 07:14:46 -- spdk/autotest.sh@108 -- # for dev in $(ls 
/dev/nvme*n* | grep -v p || true) 00:05:27.623 07:14:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:27.623 07:14:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:27.623 07:14:46 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:27.623 07:14:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:27.623 No valid GPT data, bailing 00:05:27.623 07:14:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:27.623 07:14:46 -- scripts/common.sh@393 -- # pt= 00:05:27.623 07:14:46 -- scripts/common.sh@394 -- # return 1 00:05:27.623 07:14:46 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:27.623 1+0 records in 00:05:27.623 1+0 records out 00:05:27.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394943 s, 266 MB/s 00:05:27.623 07:14:46 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:27.623 07:14:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:27.623 07:14:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:27.623 07:14:46 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:27.623 07:14:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:27.623 No valid GPT data, bailing 00:05:27.623 07:14:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:27.623 07:14:46 -- scripts/common.sh@393 -- # pt= 00:05:27.623 07:14:46 -- scripts/common.sh@394 -- # return 1 00:05:27.623 07:14:46 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:27.623 1+0 records in 00:05:27.623 1+0 records out 00:05:27.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525706 s, 199 MB/s 00:05:27.623 07:14:46 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:27.623 07:14:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:27.623 07:14:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:27.623 07:14:46 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:27.623 07:14:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:27.623 No valid GPT data, bailing 00:05:27.623 07:14:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:27.623 07:14:46 -- scripts/common.sh@393 -- # pt= 00:05:27.623 07:14:46 -- scripts/common.sh@394 -- # return 1 00:05:27.623 07:14:46 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:27.623 1+0 records in 00:05:27.623 1+0 records out 00:05:27.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494212 s, 212 MB/s 00:05:27.623 07:14:46 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:27.623 07:14:46 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:27.623 07:14:46 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:27.623 07:14:46 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:27.623 07:14:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:27.623 No valid GPT data, bailing 00:05:27.623 07:14:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:27.623 07:14:46 -- scripts/common.sh@393 -- # pt= 00:05:27.623 07:14:46 -- scripts/common.sh@394 -- # return 1 00:05:27.623 07:14:46 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:27.623 1+0 records in 00:05:27.623 1+0 records out 00:05:27.623 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0044722 s, 234 MB/s 00:05:27.623 07:14:46 -- spdk/autotest.sh@116 -- # sync 00:05:27.623 07:14:46 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:27.623 07:14:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:27.623 07:14:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:27.623 07:14:48 -- spdk/autotest.sh@122 -- # uname -s 00:05:27.623 07:14:48 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:05:27.623 07:14:48 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:27.623 07:14:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.623 07:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.623 07:14:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.623 ************************************ 00:05:27.623 START TEST setup.sh 00:05:27.623 ************************************ 00:05:27.623 07:14:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:27.623 * Looking for test storage... 00:05:27.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:27.623 07:14:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.623 07:14:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.623 07:14:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.623 07:14:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.623 07:14:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.623 07:14:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.623 07:14:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.623 07:14:48 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.623 07:14:48 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.623 07:14:48 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.623 07:14:48 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.623 07:14:48 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.623 07:14:48 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.623 07:14:48 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.623 07:14:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.623 07:14:48 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.623 07:14:48 -- scripts/common.sh@344 -- # : 1 00:05:27.623 07:14:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.623 07:14:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.623 07:14:48 -- scripts/common.sh@364 -- # decimal 1 00:05:27.623 07:14:48 -- scripts/common.sh@352 -- # local d=1 00:05:27.623 07:14:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.623 07:14:48 -- scripts/common.sh@354 -- # echo 1 00:05:27.623 07:14:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.623 07:14:48 -- scripts/common.sh@365 -- # decimal 2 00:05:27.623 07:14:48 -- scripts/common.sh@352 -- # local d=2 00:05:27.623 07:14:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.623 07:14:48 -- scripts/common.sh@354 -- # echo 2 00:05:27.623 07:14:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.623 07:14:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.623 07:14:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.623 07:14:48 -- scripts/common.sh@367 -- # return 0 00:05:27.623 07:14:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.623 07:14:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.623 --rc genhtml_branch_coverage=1 00:05:27.623 --rc genhtml_function_coverage=1 00:05:27.623 --rc genhtml_legend=1 00:05:27.623 --rc geninfo_all_blocks=1 00:05:27.623 --rc geninfo_unexecuted_blocks=1 00:05:27.623 00:05:27.623 ' 00:05:27.623 07:14:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.624 --rc genhtml_branch_coverage=1 00:05:27.624 --rc genhtml_function_coverage=1 00:05:27.624 --rc genhtml_legend=1 00:05:27.624 --rc geninfo_all_blocks=1 00:05:27.624 --rc geninfo_unexecuted_blocks=1 00:05:27.624 00:05:27.624 ' 00:05:27.624 07:14:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.624 --rc genhtml_branch_coverage=1 00:05:27.624 --rc genhtml_function_coverage=1 00:05:27.624 --rc genhtml_legend=1 00:05:27.624 --rc geninfo_all_blocks=1 00:05:27.624 --rc geninfo_unexecuted_blocks=1 00:05:27.624 00:05:27.624 ' 00:05:27.624 07:14:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.624 --rc genhtml_branch_coverage=1 00:05:27.624 --rc genhtml_function_coverage=1 00:05:27.624 --rc genhtml_legend=1 00:05:27.624 --rc geninfo_all_blocks=1 00:05:27.624 --rc geninfo_unexecuted_blocks=1 00:05:27.624 00:05:27.624 ' 00:05:27.624 07:14:48 -- setup/test-setup.sh@10 -- # uname -s 00:05:27.624 07:14:48 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:27.624 07:14:48 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:27.624 07:14:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.624 07:14:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.624 07:14:48 -- common/autotest_common.sh@10 -- # set +x 00:05:27.624 ************************************ 00:05:27.624 START TEST acl 00:05:27.624 ************************************ 00:05:27.624 07:14:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:27.624 * Looking for test storage... 
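Just before the setup tests start, autotest.sh walked every non-partition NVMe namespace, probed it for a partition table, and zeroed the first MiB of anything that came back empty ("No valid GPT data, bailing"). A rough sketch of that scrub under the same assumptions, using only the blkid probe that appears in the trace (the spdk-gpt.py check is omitted):

    # sketch: skip partition nodes, zero the first MiB of namespaces with no partition table
    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z $pt ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
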
00:05:27.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:27.624 07:14:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:27.624 07:14:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:27.624 07:14:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:27.624 07:14:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:27.624 07:14:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:27.624 07:14:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:27.624 07:14:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:27.624 07:14:49 -- scripts/common.sh@335 -- # IFS=.-: 00:05:27.624 07:14:49 -- scripts/common.sh@335 -- # read -ra ver1 00:05:27.624 07:14:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.624 07:14:49 -- scripts/common.sh@336 -- # read -ra ver2 00:05:27.624 07:14:49 -- scripts/common.sh@337 -- # local 'op=<' 00:05:27.624 07:14:49 -- scripts/common.sh@339 -- # ver1_l=2 00:05:27.624 07:14:49 -- scripts/common.sh@340 -- # ver2_l=1 00:05:27.624 07:14:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:27.624 07:14:49 -- scripts/common.sh@343 -- # case "$op" in 00:05:27.624 07:14:49 -- scripts/common.sh@344 -- # : 1 00:05:27.624 07:14:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:27.624 07:14:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.624 07:14:49 -- scripts/common.sh@364 -- # decimal 1 00:05:27.624 07:14:49 -- scripts/common.sh@352 -- # local d=1 00:05:27.624 07:14:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.624 07:14:49 -- scripts/common.sh@354 -- # echo 1 00:05:27.624 07:14:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:27.624 07:14:49 -- scripts/common.sh@365 -- # decimal 2 00:05:27.624 07:14:49 -- scripts/common.sh@352 -- # local d=2 00:05:27.624 07:14:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.624 07:14:49 -- scripts/common.sh@354 -- # echo 2 00:05:27.624 07:14:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:27.624 07:14:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:27.624 07:14:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:27.624 07:14:49 -- scripts/common.sh@367 -- # return 0 00:05:27.624 07:14:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.624 07:14:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:27.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.624 --rc genhtml_branch_coverage=1 00:05:27.624 --rc genhtml_function_coverage=1 00:05:27.624 --rc genhtml_legend=1 00:05:27.624 --rc geninfo_all_blocks=1 00:05:27.624 --rc geninfo_unexecuted_blocks=1 00:05:27.624 00:05:27.624 ' 00:05:27.624 07:14:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:27.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.624 --rc genhtml_branch_coverage=1 00:05:27.624 --rc genhtml_function_coverage=1 00:05:27.624 --rc genhtml_legend=1 00:05:27.624 --rc geninfo_all_blocks=1 00:05:27.624 --rc geninfo_unexecuted_blocks=1 00:05:27.624 00:05:27.624 ' 00:05:27.624 07:14:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:27.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.624 --rc genhtml_branch_coverage=1 00:05:27.624 --rc genhtml_function_coverage=1 00:05:27.624 --rc genhtml_legend=1 00:05:27.624 --rc geninfo_all_blocks=1 00:05:27.624 --rc geninfo_unexecuted_blocks=1 00:05:27.624 00:05:27.624 ' 00:05:27.624 07:14:49 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:27.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.624 --rc genhtml_branch_coverage=1 00:05:27.624 --rc genhtml_function_coverage=1 00:05:27.624 --rc genhtml_legend=1 00:05:27.624 --rc geninfo_all_blocks=1 00:05:27.624 --rc geninfo_unexecuted_blocks=1 00:05:27.624 00:05:27.624 ' 00:05:27.624 07:14:49 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:27.624 07:14:49 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:27.624 07:14:49 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:27.624 07:14:49 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:27.624 07:14:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.624 07:14:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:27.624 07:14:49 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:27.624 07:14:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:27.624 07:14:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.624 07:14:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.624 07:14:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:27.624 07:14:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:27.624 07:14:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:27.624 07:14:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.624 07:14:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.624 07:14:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:27.624 07:14:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:27.624 07:14:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:27.624 07:14:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.624 07:14:49 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:27.624 07:14:49 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:27.624 07:14:49 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:27.624 07:14:49 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:27.624 07:14:49 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:27.624 07:14:49 -- setup/acl.sh@12 -- # devs=() 00:05:27.624 07:14:49 -- setup/acl.sh@12 -- # declare -a devs 00:05:27.624 07:14:49 -- setup/acl.sh@13 -- # drivers=() 00:05:27.624 07:14:49 -- setup/acl.sh@13 -- # declare -A drivers 00:05:27.624 07:14:49 -- setup/acl.sh@51 -- # setup reset 00:05:27.624 07:14:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.624 07:14:49 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.624 07:14:49 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:27.624 07:14:49 -- setup/acl.sh@16 -- # local dev driver 00:05:27.624 07:14:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:27.624 07:14:49 -- setup/acl.sh@15 -- # setup output status 00:05:27.624 07:14:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.624 07:14:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:27.883 Hugepages 00:05:27.883 node hugesize free / total 00:05:27.883 07:14:50 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:27.883 07:14:50 -- setup/acl.sh@19 -- # continue 00:05:27.883 07:14:50 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:27.883 00:05:27.883 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:27.883 07:14:50 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:27.883 07:14:50 -- setup/acl.sh@19 -- # continue 00:05:27.883 07:14:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:27.883 07:14:50 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:27.883 07:14:50 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:27.883 07:14:50 -- setup/acl.sh@20 -- # continue 00:05:27.883 07:14:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:28.142 07:14:50 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:28.142 07:14:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:28.142 07:14:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:28.142 07:14:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:28.142 07:14:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:28.142 07:14:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:28.142 07:14:50 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:28.142 07:14:50 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:28.142 07:14:50 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:28.142 07:14:50 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:28.142 07:14:50 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:28.142 07:14:50 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:28.142 07:14:50 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:28.142 07:14:50 -- setup/acl.sh@54 -- # run_test denied denied 00:05:28.142 07:14:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.142 07:14:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.142 07:14:50 -- common/autotest_common.sh@10 -- # set +x 00:05:28.142 ************************************ 00:05:28.142 START TEST denied 00:05:28.142 ************************************ 00:05:28.142 07:14:50 -- common/autotest_common.sh@1114 -- # denied 00:05:28.142 07:14:50 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:28.142 07:14:50 -- setup/acl.sh@38 -- # setup output config 00:05:28.142 07:14:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.142 07:14:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.142 07:14:50 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:29.147 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:29.147 07:14:51 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:29.147 07:14:51 -- setup/acl.sh@28 -- # local dev driver 00:05:29.147 07:14:51 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:29.147 07:14:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:29.147 07:14:51 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:29.147 07:14:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:29.148 07:14:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:29.148 07:14:51 -- setup/acl.sh@41 -- # setup reset 00:05:29.148 07:14:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.148 07:14:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.715 00:05:29.715 real 0m1.507s 00:05:29.715 user 0m0.619s 00:05:29.715 sys 0m0.842s 00:05:29.715 07:14:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.715 07:14:51 -- common/autotest_common.sh@10 -- # set +x 00:05:29.715 ************************************ 00:05:29.715 END TEST denied 00:05:29.715 
************************************ 00:05:29.715 07:14:51 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:29.715 07:14:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.715 07:14:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.715 07:14:51 -- common/autotest_common.sh@10 -- # set +x 00:05:29.715 ************************************ 00:05:29.715 START TEST allowed 00:05:29.715 ************************************ 00:05:29.715 07:14:51 -- common/autotest_common.sh@1114 -- # allowed 00:05:29.715 07:14:51 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:29.715 07:14:51 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:29.715 07:14:51 -- setup/acl.sh@45 -- # setup output config 00:05:29.715 07:14:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.715 07:14:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.653 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:30.653 07:14:52 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:30.653 07:14:52 -- setup/acl.sh@28 -- # local dev driver 00:05:30.653 07:14:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:30.653 07:14:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:30.653 07:14:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:30.653 07:14:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:30.653 07:14:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:30.653 07:14:52 -- setup/acl.sh@48 -- # setup reset 00:05:30.653 07:14:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:30.653 07:14:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.221 00:05:31.221 real 0m1.557s 00:05:31.221 user 0m0.718s 00:05:31.221 sys 0m0.834s 00:05:31.221 07:14:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.221 ************************************ 00:05:31.221 END TEST allowed 00:05:31.221 ************************************ 00:05:31.221 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:31.221 ************************************ 00:05:31.221 END TEST acl 00:05:31.221 ************************************ 00:05:31.221 00:05:31.221 real 0m4.462s 00:05:31.221 user 0m2.007s 00:05:31.221 sys 0m2.443s 00:05:31.221 07:14:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:31.221 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:31.221 07:14:53 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:31.221 07:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.221 07:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.221 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:31.221 ************************************ 00:05:31.221 START TEST hugepages 00:05:31.221 ************************************ 00:05:31.221 07:14:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:31.482 * Looking for test storage... 
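The acl result above comes down to one sysfs check: resolve the driver symlink for the controller and see whether it stayed on the kernel nvme driver (the PCI_BLOCKED "denied" case) or was rebound to uio_pci_generic by setup.sh (the PCI_ALLOWED "allowed" case). A small sketch of that check, with a hypothetical helper name:

    # sketch: report the driver a PCI function is currently bound to
    bound_driver() {
        basename "$(readlink -f "/sys/bus/pci/devices/$1/driver")"
    }
    if [[ $(bound_driver 0000:00:06.0) == nvme ]]; then
        echo "0000:00:06.0 still on nvme (blocked, left alone)"
    else
        echo "0000:00:06.0 rebound (allowed, claimed by setup.sh reset)"
    fi
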
00:05:31.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:31.482 07:14:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:31.482 07:14:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:31.482 07:14:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:31.482 07:14:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:31.482 07:14:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:31.482 07:14:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:31.482 07:14:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:31.482 07:14:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:31.482 07:14:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:31.482 07:14:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.482 07:14:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:31.482 07:14:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:31.482 07:14:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:31.482 07:14:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:31.482 07:14:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:31.482 07:14:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:31.482 07:14:53 -- scripts/common.sh@344 -- # : 1 00:05:31.482 07:14:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:31.482 07:14:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.482 07:14:53 -- scripts/common.sh@364 -- # decimal 1 00:05:31.482 07:14:53 -- scripts/common.sh@352 -- # local d=1 00:05:31.482 07:14:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.482 07:14:53 -- scripts/common.sh@354 -- # echo 1 00:05:31.482 07:14:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:31.482 07:14:53 -- scripts/common.sh@365 -- # decimal 2 00:05:31.482 07:14:53 -- scripts/common.sh@352 -- # local d=2 00:05:31.482 07:14:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.482 07:14:53 -- scripts/common.sh@354 -- # echo 2 00:05:31.482 07:14:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:31.482 07:14:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:31.482 07:14:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:31.482 07:14:53 -- scripts/common.sh@367 -- # return 0 00:05:31.482 07:14:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.482 07:14:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:31.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.482 --rc genhtml_branch_coverage=1 00:05:31.482 --rc genhtml_function_coverage=1 00:05:31.482 --rc genhtml_legend=1 00:05:31.482 --rc geninfo_all_blocks=1 00:05:31.482 --rc geninfo_unexecuted_blocks=1 00:05:31.482 00:05:31.482 ' 00:05:31.482 07:14:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:31.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.482 --rc genhtml_branch_coverage=1 00:05:31.482 --rc genhtml_function_coverage=1 00:05:31.482 --rc genhtml_legend=1 00:05:31.482 --rc geninfo_all_blocks=1 00:05:31.482 --rc geninfo_unexecuted_blocks=1 00:05:31.482 00:05:31.482 ' 00:05:31.482 07:14:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:31.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.482 --rc genhtml_branch_coverage=1 00:05:31.482 --rc genhtml_function_coverage=1 00:05:31.482 --rc genhtml_legend=1 00:05:31.482 --rc geninfo_all_blocks=1 00:05:31.482 --rc geninfo_unexecuted_blocks=1 00:05:31.482 00:05:31.482 ' 00:05:31.482 07:14:53 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:31.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.482 --rc genhtml_branch_coverage=1 00:05:31.482 --rc genhtml_function_coverage=1 00:05:31.482 --rc genhtml_legend=1 00:05:31.482 --rc geninfo_all_blocks=1 00:05:31.482 --rc geninfo_unexecuted_blocks=1 00:05:31.482 00:05:31.482 ' 00:05:31.482 07:14:53 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:31.482 07:14:53 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:31.482 07:14:53 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:31.482 07:14:53 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:31.482 07:14:53 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:31.482 07:14:53 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:31.482 07:14:53 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:31.482 07:14:53 -- setup/common.sh@18 -- # local node= 00:05:31.482 07:14:53 -- setup/common.sh@19 -- # local var val 00:05:31.482 07:14:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:31.482 07:14:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:31.482 07:14:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:31.482 07:14:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:31.482 07:14:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:31.482 07:14:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:31.482 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.482 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.482 07:14:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 4543276 kB' 'MemAvailable: 7335280 kB' 'Buffers: 2684 kB' 'Cached: 2995312 kB' 'SwapCached: 0 kB' 'Active: 455044 kB' 'Inactive: 2659624 kB' 'Active(anon): 127184 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659624 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 118368 kB' 'Mapped: 51328 kB' 'Shmem: 10512 kB' 'KReclaimable: 82936 kB' 'Slab: 184444 kB' 'SReclaimable: 82936 kB' 'SUnreclaim: 101508 kB' 'KernelStack: 6736 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411012 kB' 'Committed_AS: 308324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:31.482 07:14:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.482 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.482 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.482 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.482 07:14:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.482 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.482 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.482 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.482 07:14:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.482 07:14:53 -- 
setup/common.sh@32 -- # continue 00:05:31.482 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.483 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.483 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # continue 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:31.484 07:14:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:31.484 07:14:53 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:31.484 07:14:53 -- setup/common.sh@33 -- # echo 2048 00:05:31.484 07:14:53 -- setup/common.sh@33 -- # return 0 00:05:31.484 07:14:53 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:31.484 07:14:53 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:31.484 07:14:53 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:31.484 07:14:53 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:31.484 07:14:53 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:31.484 07:14:53 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:31.484 07:14:53 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:31.484 07:14:53 -- setup/hugepages.sh@207 -- # get_nodes 00:05:31.484 07:14:53 -- setup/hugepages.sh@27 -- # local node 00:05:31.484 07:14:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:31.484 07:14:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:31.484 07:14:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:31.484 07:14:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:31.484 07:14:53 -- setup/hugepages.sh@208 -- # clear_hp 00:05:31.484 07:14:53 -- setup/hugepages.sh@37 -- # local node hp 00:05:31.484 07:14:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:31.484 07:14:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:31.484 07:14:53 -- setup/hugepages.sh@41 -- # echo 0 00:05:31.484 07:14:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:31.484 07:14:53 -- setup/hugepages.sh@41 -- # echo 0 00:05:31.484 07:14:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:31.484 07:14:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:31.484 07:14:53 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:31.484 07:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.484 07:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.484 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:31.484 ************************************ 00:05:31.484 START TEST default_setup 00:05:31.484 ************************************ 00:05:31.484 07:14:53 -- common/autotest_common.sh@1114 -- # default_setup 00:05:31.484 07:14:53 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:31.484 07:14:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:31.484 07:14:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:31.484 07:14:53 -- setup/hugepages.sh@51 -- # shift 00:05:31.484 07:14:53 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:31.484 07:14:53 -- setup/hugepages.sh@52 -- # local node_ids 00:05:31.484 07:14:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:31.484 07:14:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:31.484 07:14:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:31.484 07:14:53 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:31.484 07:14:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:31.484 07:14:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:31.484 07:14:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:31.484 07:14:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:31.484 07:14:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:31.484 07:14:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:31.484 07:14:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:31.484 07:14:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:31.484 07:14:53 -- setup/hugepages.sh@73 -- # return 0 00:05:31.484 07:14:53 -- setup/hugepages.sh@137 -- # setup output 00:05:31.484 07:14:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.484 07:14:53 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.423 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.423 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.423 07:14:54 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:32.423 07:14:54 -- setup/hugepages.sh@89 -- # local node 00:05:32.423 07:14:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:32.423 07:14:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:32.423 07:14:54 -- setup/hugepages.sh@92 -- # local surp 00:05:32.423 07:14:54 -- setup/hugepages.sh@93 -- # local resv 00:05:32.423 07:14:54 -- setup/hugepages.sh@94 -- # local anon 00:05:32.423 07:14:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:32.423 07:14:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:32.423 07:14:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:32.423 07:14:54 -- setup/common.sh@18 -- # local node= 00:05:32.423 07:14:54 -- setup/common.sh@19 -- # local var val 00:05:32.423 07:14:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:32.423 07:14:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.423 07:14:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.423 07:14:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.423 07:14:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.423 07:14:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6599068 kB' 'MemAvailable: 9390896 kB' 'Buffers: 2684 kB' 'Cached: 2995304 kB' 'SwapCached: 0 kB' 'Active: 456344 kB' 'Inactive: 2659636 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119624 kB' 'Mapped: 51244 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184336 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101780 kB' 'KernelStack: 6736 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.423 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.423 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- 
setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:32.424 07:14:54 -- setup/common.sh@33 -- # echo 0 00:05:32.424 07:14:54 -- setup/common.sh@33 -- # return 0 00:05:32.424 07:14:54 -- setup/hugepages.sh@97 -- # anon=0 00:05:32.424 07:14:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:32.424 07:14:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.424 07:14:54 -- setup/common.sh@18 -- # local node= 00:05:32.424 07:14:54 -- setup/common.sh@19 -- # local var val 00:05:32.424 07:14:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:32.424 07:14:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.424 07:14:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.424 07:14:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.424 07:14:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.424 07:14:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6599068 kB' 'MemAvailable: 9390896 kB' 'Buffers: 2684 kB' 'Cached: 2995304 kB' 'SwapCached: 0 kB' 'Active: 456036 kB' 'Inactive: 2659636 kB' 'Active(anon): 128176 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119568 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184320 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101764 kB' 'KernelStack: 6720 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.424 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.424 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 
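The per-field scan traced in this stretch is the lookup that setup/common.sh's get_meminfo performs for each requested key (Hugepagesize, AnonHugePages, HugePages_Surp, and so on): it walks the captured meminfo lines with IFS=': ', skipping every field until the key matches, then echoes that field's value and returns. A minimal standalone sketch of the same pattern for a plain /proc/meminfo lookup; the name meminfo_value and its variables are illustrative, not the script's own:

meminfo_value() {
    local key=$1 var val _
    # IFS of ':' and space splits "Key:   value kB" into key, value, unit
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { printf '%s\n' "$val"; return 0; }
    done < /proc/meminfo
    return 1    # key not present
}
# e.g. meminfo_value Hugepagesize    -> 2048 (kB), the value picked up earlier in this run
#      meminfo_value HugePages_Surp  -> 0
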
00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- 
setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 
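When a per-node figure is requested, the same helper reads that node's meminfo under /sys/devices/system/node/ instead; those lines carry a "Node <N> " prefix, and the mem=("${mem[@]#Node +([0-9]) }") step visible in this trace strips it from every captured line with an extglob pattern before the field scan runs. A small illustration of that expansion, using made-up sample lines:

shopt -s extglob                                     # +([0-9]) needs extended globbing
mem=('Node 0 HugePages_Total: 1024' 'Node 0 HugePages_Free: 1024')
mem=("${mem[@]#Node +([0-9]) }")                     # drop the "Node N " prefix from each element
printf '%s\n' "${mem[@]}"                            # -> HugePages_Total: 1024 / HugePages_Free: 1024
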
00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.425 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.425 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.425 07:14:54 -- setup/common.sh@33 -- # echo 0 00:05:32.425 07:14:54 -- setup/common.sh@33 -- # return 0 00:05:32.425 07:14:54 -- setup/hugepages.sh@99 -- # surp=0 00:05:32.426 07:14:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:32.426 07:14:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:32.426 07:14:54 -- setup/common.sh@18 -- # local node= 00:05:32.426 07:14:54 -- setup/common.sh@19 -- # local var val 00:05:32.426 07:14:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:32.426 07:14:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.426 07:14:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.426 07:14:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.426 07:14:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.426 07:14:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.426 
07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.426 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.426 07:14:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6599068 kB' 'MemAvailable: 9390896 kB' 'Buffers: 2684 kB' 'Cached: 2995304 kB' 'SwapCached: 0 kB' 'Active: 456204 kB' 'Inactive: 2659636 kB' 'Active(anon): 128344 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119240 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184332 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101776 kB' 'KernelStack: 6736 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 
07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.687 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.687 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 
07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.688 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:32.688 07:14:54 -- setup/common.sh@33 -- # echo 0 00:05:32.688 07:14:54 -- setup/common.sh@33 -- # return 0 00:05:32.688 07:14:54 -- setup/hugepages.sh@100 -- # resv=0 00:05:32.688 07:14:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:32.688 nr_hugepages=1024 00:05:32.688 resv_hugepages=0 00:05:32.688 surplus_hugepages=0 00:05:32.688 anon_hugepages=0 00:05:32.688 07:14:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:32.688 07:14:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:32.688 07:14:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:32.688 07:14:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.688 07:14:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:32.688 07:14:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:32.688 07:14:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:32.688 07:14:54 -- setup/common.sh@18 -- # local node= 00:05:32.688 07:14:54 -- setup/common.sh@19 -- # local var val 00:05:32.688 07:14:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:32.688 07:14:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.688 07:14:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:32.688 07:14:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:32.688 07:14:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.688 07:14:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.688 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6599324 kB' 'MemAvailable: 9391152 kB' 'Buffers: 2684 kB' 'Cached: 2995304 kB' 'SwapCached: 0 kB' 'Active: 455892 kB' 'Inactive: 2659636 kB' 'Active(anon): 128032 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119152 kB' 'Mapped: 50936 kB' 
'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184332 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101776 kB' 'KernelStack: 6704 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 
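The values echoed a little above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the consistency check that verify_nr_hugepages performs here: the requested count of 1024 pages (2097152 kB divided by the 2048 kB default page size) must equal nr_hugepages plus any surplus and reserved pages, after which HugePages_Total is read back and compared. A rough sketch of that arithmetic, reusing the illustrative meminfo_value helper from the earlier aside; the expected variable is likewise illustrative:

expected=1024                                   # 2097152 kB / 2048 kB per page
surp=$(meminfo_value HugePages_Surp)            # 0 in this run
resv=$(meminfo_value HugePages_Rsvd)            # 0 in this run
total=$(meminfo_value HugePages_Total)          # 1024 in this run
nr_hugepages=$(< /proc/sys/vm/nr_hugepages)     # the global_huge_nr path noted at the top of hugepages.sh
(( expected == nr_hugepages + surp + resv )) || echo "surplus/reserved pages left over"
(( expected == total ))                      || echo "HugePages_Total does not match the requested count"
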
07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- 
setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.689 07:14:54 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:32.689 07:14:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- 
setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:32.690 07:14:54 -- setup/common.sh@33 -- # echo 1024 00:05:32.690 07:14:54 -- setup/common.sh@33 -- # return 0 00:05:32.690 07:14:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:32.690 07:14:54 -- setup/hugepages.sh@112 -- # get_nodes 00:05:32.690 07:14:54 -- setup/hugepages.sh@27 -- # local node 00:05:32.690 07:14:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:32.690 07:14:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:32.690 07:14:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:32.690 07:14:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:32.690 07:14:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:32.690 07:14:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:32.690 07:14:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:32.690 07:14:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:32.690 07:14:54 -- setup/common.sh@18 -- # local node=0 00:05:32.690 07:14:54 -- setup/common.sh@19 -- # local var val 00:05:32.690 07:14:54 -- setup/common.sh@20 -- # local mem_f mem 00:05:32.690 07:14:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:32.690 07:14:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:32.690 07:14:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:32.690 07:14:54 -- setup/common.sh@28 -- # mapfile -t mem 00:05:32.690 07:14:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6599324 kB' 'MemUsed: 5639796 kB' 'SwapCached: 0 kB' 'Active: 456116 kB' 'Inactive: 2659636 kB' 'Active(anon): 128256 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659636 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2997988 kB' 'Mapped: 50936 kB' 'AnonPages: 119368 kB' 'Shmem: 10488 kB' 'KernelStack: 6688 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 184332 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 
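The trace above is setup/common.sh's get_meminfo resolving the node-0 meminfo file (/sys/devices/system/node/node0/meminfo), stripping the "Node 0" prefix, and then scanning the file key by key until it reaches the requested HugePages_Surp field. A condensed, illustrative sketch of that lookup pattern (the helper name, the sed-based prefix strip, and the plain while-read loop are stand-ins, not the actual SPDK helper):

    get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # prefer the per-node file when a node id is passed, as the trace does for node 0
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      while IFS=': ' read -r var val _; do   # "HugePages_Surp:  0" -> var=HugePages_Surp val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    }
    # get_meminfo_sketch HugePages_Surp 0   -> 0 in the run traced here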
00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.690 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.690 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # continue 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # IFS=': ' 00:05:32.691 07:14:54 -- setup/common.sh@31 -- # read -r var val _ 00:05:32.691 07:14:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:32.691 07:14:54 -- setup/common.sh@33 -- # echo 0 00:05:32.691 07:14:54 -- setup/common.sh@33 -- # return 0 00:05:32.691 07:14:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:32.691 07:14:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:32.691 07:14:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:32.691 07:14:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:32.691 node0=1024 expecting 1024 00:05:32.691 ************************************ 00:05:32.691 END TEST default_setup 00:05:32.691 ************************************ 00:05:32.691 07:14:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:32.691 07:14:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:32.691 00:05:32.691 real 0m1.089s 00:05:32.691 user 0m0.499s 00:05:32.691 sys 0m0.490s 00:05:32.691 07:14:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:32.691 07:14:54 -- common/autotest_common.sh@10 -- # set +x 00:05:32.691 07:14:54 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:32.691 07:14:54 
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.691 07:14:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.691 07:14:54 -- common/autotest_common.sh@10 -- # set +x 00:05:32.691 ************************************ 00:05:32.691 START TEST per_node_1G_alloc 00:05:32.691 ************************************ 00:05:32.691 07:14:54 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:32.691 07:14:54 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:32.691 07:14:54 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:32.691 07:14:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:32.691 07:14:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:32.691 07:14:54 -- setup/hugepages.sh@51 -- # shift 00:05:32.691 07:14:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:32.691 07:14:54 -- setup/hugepages.sh@52 -- # local node_ids 00:05:32.691 07:14:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:32.691 07:14:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:32.691 07:14:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:32.691 07:14:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:32.691 07:14:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:32.691 07:14:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:32.691 07:14:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:32.691 07:14:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:32.691 07:14:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:32.691 07:14:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:32.691 07:14:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:32.691 07:14:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:32.691 07:14:54 -- setup/hugepages.sh@73 -- # return 0 00:05:32.691 07:14:54 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:32.691 07:14:54 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:32.691 07:14:54 -- setup/hugepages.sh@146 -- # setup output 00:05:32.691 07:14:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.691 07:14:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.949 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:32.949 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.211 07:14:55 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:33.211 07:14:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:33.211 07:14:55 -- setup/hugepages.sh@89 -- # local node 00:05:33.211 07:14:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:33.211 07:14:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:33.211 07:14:55 -- setup/hugepages.sh@92 -- # local surp 00:05:33.211 07:14:55 -- setup/hugepages.sh@93 -- # local resv 00:05:33.211 07:14:55 -- setup/hugepages.sh@94 -- # local anon 00:05:33.211 07:14:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:33.211 07:14:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:33.211 07:14:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:33.211 07:14:55 -- setup/common.sh@18 -- # local node= 00:05:33.211 07:14:55 -- setup/common.sh@19 -- # local var val 00:05:33.211 07:14:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.211 07:14:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.211 07:14:55 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.211 07:14:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.211 07:14:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.211 07:14:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.211 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:14:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7708032 kB' 'MemAvailable: 10499868 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456308 kB' 'Inactive: 2659644 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119544 kB' 'Mapped: 51052 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184396 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101840 kB' 'KernelStack: 6752 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:33.211 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:14:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.211 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.211 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:14:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.211 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.211 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.211 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.211 07:14:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 
-- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 
07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.212 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.212 07:14:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.212 07:14:55 -- setup/common.sh@33 -- # echo 0 00:05:33.212 07:14:55 -- setup/common.sh@33 -- # return 0 00:05:33.212 07:14:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:33.212 07:14:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:33.212 07:14:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.212 07:14:55 -- setup/common.sh@18 -- # local node= 00:05:33.213 07:14:55 -- setup/common.sh@19 -- # local var val 00:05:33.213 07:14:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.213 07:14:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.213 07:14:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.213 07:14:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.213 07:14:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.213 07:14:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7708032 kB' 'MemAvailable: 10499868 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456260 kB' 'Inactive: 2659644 kB' 
'Active(anon): 128400 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184388 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101832 kB' 'KernelStack: 6736 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # 
continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.213 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.213 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.214 07:14:55 -- setup/common.sh@33 -- # echo 0 00:05:33.214 07:14:55 -- setup/common.sh@33 -- # return 0 00:05:33.214 07:14:55 -- setup/hugepages.sh@99 -- # surp=0 00:05:33.214 07:14:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:33.214 07:14:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:33.214 07:14:55 -- setup/common.sh@18 -- # local node= 00:05:33.214 07:14:55 -- setup/common.sh@19 -- # local var val 00:05:33.214 07:14:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.214 07:14:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.214 07:14:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.214 07:14:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.214 07:14:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.214 07:14:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7708032 kB' 'MemAvailable: 10499868 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456232 kB' 'Inactive: 2659644 kB' 'Active(anon): 128372 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184372 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101816 kB' 'KernelStack: 6736 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 
'DirectMap1G: 9437184 kB' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.214 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.214 07:14:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 
00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 
07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.215 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.215 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.216 07:14:55 -- setup/common.sh@33 -- # echo 0 00:05:33.216 07:14:55 -- setup/common.sh@33 -- # return 0 00:05:33.216 nr_hugepages=512 00:05:33.216 resv_hugepages=0 00:05:33.216 surplus_hugepages=0 00:05:33.216 anon_hugepages=0 00:05:33.216 07:14:55 -- setup/hugepages.sh@100 -- # resv=0 00:05:33.216 07:14:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:33.216 07:14:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:33.216 07:14:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:33.216 07:14:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:33.216 07:14:55 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:33.216 07:14:55 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:33.216 07:14:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:33.216 07:14:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:33.216 07:14:55 -- setup/common.sh@18 -- # local node= 00:05:33.216 07:14:55 -- setup/common.sh@19 -- # local var val 00:05:33.216 07:14:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.216 07:14:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.216 07:14:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.216 07:14:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.216 07:14:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.216 07:14:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7708032 kB' 'MemAvailable: 10499868 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456280 kB' 'Inactive: 2659644 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119500 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184364 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101808 kB' 'KernelStack: 6736 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 319704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55416 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 
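The wall of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]] / continue" entries in this trace is setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches the requested key. A minimal sketch of that lookup pattern, where get_mem_value is a hypothetical stand-in name and only the mechanics visible in the trace (mapfile, the "Node <n> " prefix strip, IFS=': ' parsing) are assumed:

#!/usr/bin/env bash
shopt -s extglob                       # needed for the "Node <n> " prefix strip

# get_mem_value KEY [NODE] - look up one field from /proc/meminfo or from a
# node-local meminfo file, following the same pattern as the traced get_meminfo.
get_mem_value() {
    local key=$1 node=${2-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # Per-node counters live in sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"   # split "Key:   value [kB]"
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_mem_value HugePages_Rsvd       # global count, e.g. 0 in this run
get_mem_value HugePages_Surp 0     # node0-local count, e.g. 0 in this run

Scanning field by field is what makes the xtrace so long: every key before the match produces one comparison entry plus one "continue" entry.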
00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 
00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.216 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.216 07:14:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.217 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.217 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.217 07:14:55 -- setup/common.sh@33 -- # echo 512 00:05:33.217 07:14:55 -- setup/common.sh@33 -- # return 0 00:05:33.217 07:14:55 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:33.217 07:14:55 -- setup/hugepages.sh@112 -- # get_nodes 00:05:33.217 07:14:55 -- setup/hugepages.sh@27 -- # local node 00:05:33.217 07:14:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:33.217 07:14:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:33.217 07:14:55 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:33.217 07:14:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:33.217 07:14:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:33.217 07:14:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:33.217 07:14:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:33.217 07:14:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.217 07:14:55 -- setup/common.sh@18 -- # local node=0 00:05:33.217 07:14:55 -- 
setup/common.sh@19 -- # local var val 00:05:33.218 07:14:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.218 07:14:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.218 07:14:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:33.218 07:14:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:33.218 07:14:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.218 07:14:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7708032 kB' 'MemUsed: 4531088 kB' 'SwapCached: 0 kB' 'Active: 456228 kB' 'Inactive: 2659644 kB' 'Active(anon): 128368 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2997992 kB' 'Mapped: 50936 kB' 'AnonPages: 119508 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 184360 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 
00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 
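The "(( 512 == nr_hugepages + surp + resv ))" check earlier and the node0 read from /sys/devices/system/node/node0/meminfo happening here are the two halves of verify_nr_hugepages: the global pool must account for every requested page, and each NUMA node's share is read back from its node-local meminfo. A rough sketch under those assumptions, reusing the hypothetical get_mem_value helper from the previous sketch (names are illustrative, not SPDK's):

# verify_pool EXPECTED - condensed version of the two checks described above.
verify_pool() {
    local expected=$1 total surp resv node
    total=$(get_mem_value HugePages_Total)
    surp=$(get_mem_value HugePages_Surp)
    resv=$(get_mem_value HugePages_Rsvd)
    # Every configured page must be either usable, surplus, or reserved.
    (( total == expected + surp + resv )) || return 1
    # Per-node view; on this single-node VM the whole pool sits on node0.
    for node in /sys/devices/system/node/node[0-9]*; do
        node=${node##*node}
        echo "node$node=$(get_mem_value HugePages_Total "$node")"
    done
}

verify_pool 512    # compare the 'node0=512 expecting 512' line just below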
00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.218 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.218 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.219 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.219 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.219 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.219 07:14:55 -- setup/common.sh@33 -- # echo 0 00:05:33.219 07:14:55 -- setup/common.sh@33 -- # return 0 00:05:33.219 07:14:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:33.219 07:14:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:33.219 node0=512 expecting 512 00:05:33.219 07:14:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:33.219 07:14:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:33.219 07:14:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:33.219 07:14:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:33.219 00:05:33.219 real 0m0.573s 00:05:33.219 user 0m0.269s 00:05:33.219 sys 0m0.303s 00:05:33.219 07:14:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.219 07:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:33.219 ************************************ 00:05:33.219 END TEST per_node_1G_alloc 00:05:33.219 ************************************ 00:05:33.219 07:14:55 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:33.219 07:14:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.219 07:14:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.219 07:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:33.219 ************************************ 00:05:33.219 START TEST even_2G_alloc 00:05:33.219 ************************************ 00:05:33.219 07:14:55 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:33.219 07:14:55 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:33.219 07:14:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:33.219 07:14:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:33.219 07:14:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:33.219 07:14:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:33.219 07:14:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:33.219 07:14:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:33.219 07:14:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:33.219 07:14:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:33.219 07:14:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:33.219 07:14:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:33.219 07:14:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:33.219 07:14:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 
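The even_2G_alloc prologue above turns the requested 2097152 kB (2 GiB) into nr_hugepages=1024 by dividing by the default hugepage size. A minimal sketch of that arithmetic; pages_for_kb is a hypothetical name, and reading Hugepagesize out of /proc/meminfo stands in for the script's own default_hugepages value (2048 kB on this machine):

# pages_for_kb SIZE_KB - how many default-sized hugepages cover SIZE_KB.
pages_for_kb() {
    local size_kb=$1 hp_kb
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    (( size_kb >= hp_kb )) || return 1   # mirrors the 'size >= default_hugepages' guard
    echo $(( size_kb / hp_kb ))
}

pages_for_kb 2097152    # 2097152 kB / 2048 kB -> 1024

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes set, the test re-runs scripts/setup.sh, which is where the device-binding lines that follow come from.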
00:05:33.219 07:14:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:33.219 07:14:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.219 07:14:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:33.219 07:14:55 -- setup/hugepages.sh@83 -- # : 0 00:05:33.219 07:14:55 -- setup/hugepages.sh@84 -- # : 0 00:05:33.219 07:14:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:33.219 07:14:55 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:33.219 07:14:55 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:33.219 07:14:55 -- setup/hugepages.sh@153 -- # setup output 00:05:33.219 07:14:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.219 07:14:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.791 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.791 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:33.791 07:14:55 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:33.791 07:14:55 -- setup/hugepages.sh@89 -- # local node 00:05:33.791 07:14:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:33.791 07:14:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:33.791 07:14:55 -- setup/hugepages.sh@92 -- # local surp 00:05:33.791 07:14:55 -- setup/hugepages.sh@93 -- # local resv 00:05:33.791 07:14:55 -- setup/hugepages.sh@94 -- # local anon 00:05:33.791 07:14:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:33.791 07:14:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:33.791 07:14:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:33.791 07:14:55 -- setup/common.sh@18 -- # local node= 00:05:33.791 07:14:55 -- setup/common.sh@19 -- # local var val 00:05:33.791 07:14:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.791 07:14:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.791 07:14:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.791 07:14:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.791 07:14:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.791 07:14:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6655472 kB' 'MemAvailable: 9447308 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456344 kB' 'Inactive: 2659644 kB' 'Active(anon): 128484 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119568 kB' 'Mapped: 51012 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184460 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101904 kB' 'KernelStack: 6728 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55496 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 
0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.791 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.791 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.791 
07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- 
setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:33.792 07:14:55 -- setup/common.sh@33 -- # echo 0 00:05:33.792 07:14:55 -- setup/common.sh@33 -- # return 0 00:05:33.792 07:14:55 -- setup/hugepages.sh@97 -- # anon=0 00:05:33.792 07:14:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:33.792 07:14:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.792 07:14:55 -- setup/common.sh@18 -- # local node= 00:05:33.792 07:14:55 -- setup/common.sh@19 -- # local var val 00:05:33.792 07:14:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.792 07:14:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.792 07:14:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.792 07:14:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.792 07:14:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.792 07:14:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6655472 kB' 'MemAvailable: 9447308 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456280 kB' 'Inactive: 2659644 kB' 'Active(anon): 128420 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119528 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184476 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101920 kB' 'KernelStack: 6736 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- 
setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.792 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.792 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 
-- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- 
setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.793 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.793 07:14:55 -- setup/common.sh@33 -- # echo 0 00:05:33.793 07:14:55 -- setup/common.sh@33 -- # return 0 00:05:33.793 07:14:55 -- setup/hugepages.sh@99 -- # surp=0 
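[Editor's note] The get_meminfo calls traced above scan /proc/meminfo (or a per-node meminfo file when a node argument is given) one field at a time: every key except the requested one takes the "continue" branch, which is why the log repeats the same [[ ... ]] / continue pattern for each field, and the matching key's value is echoed back (here HugePages_Surp -> 0, so surp=0). A minimal stand-alone sketch of that lookup, assuming the usual "Key: value" meminfo layout — a hypothetical helper, not the literal setup/common.sh implementation:

  # Sketch only: look up one field of /proc/meminfo or a node's meminfo.
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # With a node argument, read that node's meminfo instead of the global one.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node <n> "; strip that, then
      # print the value that follows the requested key.
      sed 's/^Node [0-9]* //' "$mem_f" | awk -v k="$key:" '$1 == k { print $2 }'
  }

  # Example usage, mirroring the lookups in the trace:
  get_meminfo_sketch HugePages_Surp      # -> 0
  get_meminfo_sketch HugePages_Total 0   # -> per-node pool size, e.g. 1024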
00:05:33.793 07:14:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:33.793 07:14:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:33.793 07:14:55 -- setup/common.sh@18 -- # local node= 00:05:33.793 07:14:55 -- setup/common.sh@19 -- # local var val 00:05:33.793 07:14:55 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.793 07:14:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.793 07:14:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.793 07:14:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.793 07:14:55 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.793 07:14:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.793 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6655472 kB' 'MemAvailable: 9447308 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456412 kB' 'Inactive: 2659644 kB' 'Active(anon): 128552 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119636 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184476 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101920 kB' 'KernelStack: 6720 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 
00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.794 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.794 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 
-- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:55 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:33.795 07:14:55 -- setup/common.sh@33 -- # echo 0 00:05:33.795 07:14:55 -- setup/common.sh@33 -- # return 0 00:05:33.795 nr_hugepages=1024 00:05:33.795 resv_hugepages=0 00:05:33.795 surplus_hugepages=0 00:05:33.795 anon_hugepages=0 00:05:33.795 07:14:55 -- setup/hugepages.sh@100 -- # resv=0 00:05:33.795 07:14:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:33.795 07:14:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:33.795 07:14:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:33.795 07:14:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:33.795 07:14:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:33.795 07:14:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:33.795 07:14:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:33.795 07:14:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:33.795 07:14:56 -- setup/common.sh@18 -- # local node= 00:05:33.795 07:14:56 -- setup/common.sh@19 -- # local var val 00:05:33.795 07:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.795 07:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.795 07:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:33.795 07:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:33.795 07:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.795 07:14:56 -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6655472 kB' 'MemAvailable: 9447308 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456168 kB' 'Inactive: 2659644 kB' 'Active(anon): 128308 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'AnonPages: 119392 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184464 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101908 kB' 'KernelStack: 6720 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 319904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.795 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.795 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 
07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 
07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.796 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:33.796 07:14:56 -- setup/common.sh@33 -- # echo 1024 00:05:33.796 07:14:56 -- setup/common.sh@33 -- # return 0 00:05:33.796 07:14:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:33.796 07:14:56 -- setup/hugepages.sh@112 -- # get_nodes 00:05:33.796 07:14:56 -- setup/hugepages.sh@27 -- # local node 00:05:33.796 07:14:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:33.796 07:14:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:33.796 07:14:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:33.796 07:14:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:33.796 07:14:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:33.796 07:14:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:33.796 07:14:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:33.796 07:14:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:33.796 07:14:56 -- setup/common.sh@18 -- # local node=0 00:05:33.796 07:14:56 -- setup/common.sh@19 -- # local var val 00:05:33.796 07:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:33.796 07:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:33.796 07:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:33.796 07:14:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:33.796 07:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:33.796 07:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.796 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6655220 kB' 'MemUsed: 5583900 kB' 'SwapCached: 0 kB' 'Active: 456680 kB' 'Inactive: 2659644 kB' 'Active(anon): 128820 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 336 kB' 'Writeback: 0 kB' 'FilePages: 2997992 kB' 'Mapped: 50936 kB' 'AnonPages: 119976 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 184444 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Surp: 0' 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 
07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:33.797 07:14:56 -- setup/common.sh@32 -- # continue 00:05:33.797 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.105 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.105 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.105 07:14:56 -- setup/common.sh@33 -- # echo 0 00:05:34.105 07:14:56 -- setup/common.sh@33 -- # return 0 00:05:34.105 07:14:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.105 07:14:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:34.105 07:14:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:34.105 07:14:56 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:34.105 node0=1024 expecting 1024 00:05:34.105 ************************************ 00:05:34.105 END TEST even_2G_alloc 00:05:34.105 ************************************ 00:05:34.105 07:14:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:34.105 07:14:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:34.105 00:05:34.105 real 0m0.588s 00:05:34.105 user 0m0.275s 00:05:34.105 sys 0m0.313s 00:05:34.105 07:14:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.105 07:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:34.105 07:14:56 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:34.105 07:14:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.105 07:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.105 07:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:34.105 ************************************ 00:05:34.105 START TEST odd_alloc 00:05:34.105 ************************************ 00:05:34.105 07:14:56 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:34.105 07:14:56 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:34.105 07:14:56 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:34.105 07:14:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:34.105 07:14:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:34.105 07:14:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:34.105 07:14:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:34.105 07:14:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:34.105 07:14:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.105 07:14:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:34.105 07:14:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.105 07:14:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.105 07:14:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.105 07:14:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:34.105 07:14:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:34.105 07:14:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.105 07:14:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:34.105 07:14:56 -- setup/hugepages.sh@83 -- # : 0 00:05:34.105 07:14:56 -- setup/hugepages.sh@84 -- # : 0 00:05:34.105 07:14:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.105 07:14:56 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:34.105 07:14:56 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:34.105 07:14:56 -- setup/hugepages.sh@160 -- # setup output 00:05:34.105 07:14:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.105 07:14:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.383 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.383 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.383 07:14:56 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:34.383 07:14:56 -- setup/hugepages.sh@89 -- # local node 00:05:34.383 07:14:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:34.383 07:14:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:34.383 07:14:56 -- setup/hugepages.sh@92 -- # local surp 00:05:34.383 07:14:56 -- setup/hugepages.sh@93 -- # local resv 00:05:34.383 07:14:56 -- 
setup/hugepages.sh@94 -- # local anon 00:05:34.383 07:14:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:34.383 07:14:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:34.383 07:14:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:34.383 07:14:56 -- setup/common.sh@18 -- # local node= 00:05:34.383 07:14:56 -- setup/common.sh@19 -- # local var val 00:05:34.383 07:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.383 07:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.383 07:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.383 07:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.383 07:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.383 07:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6656632 kB' 'MemAvailable: 9448468 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456492 kB' 'Inactive: 2659644 kB' 'Active(anon): 128632 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119732 kB' 'Mapped: 51064 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184468 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101912 kB' 'KernelStack: 6744 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 
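(The odd_alloc setup traced just above hands get_test_nr_hugepages a size of 2098176 kB, i.e. HUGEMEM=2049, and settles on nr_hugepages=1025 — a deliberately odd page count given 2048 kB hugepages. A minimal sketch of arithmetic that reproduces that figure follows; the variable names and the round-up rule are illustrative, not the exact code in setup/hugepages.sh.)

    # Illustrative only: reproduce the 2098176 kB -> 1025 pages conversion seen in
    # this trace, assuming a 2048 kB default hugepage size and round-up division.
    hugemem_mb=2049
    default_hugepage_kb=2048
    size_kb=$(( hugemem_mb * 1024 ))                                   # 2098176 kB
    nr_hugepages=$(( (size_kb + default_hugepage_kb - 1) / default_hugepage_kb ))
    echo "$nr_hugepages"                                               # 1025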
00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.383 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.383 07:14:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 
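(The long runs of "[[ key == ... ]]" / "continue" entries around this point are setup/common.sh's get_meminfo walking a /proc/meminfo snapshot one field at a time until the requested key matches, then echoing its value. A standalone sketch of that pattern is below, assuming the same IFS=': ' splitting and "Node N " prefix stripping visible in the trace; the function name and details are illustrative, not the SPDK helper itself.)

    #!/usr/bin/env bash
    # Illustrative re-creation of the per-key lookup traced in this log; prints
    # the value of one /proc/meminfo field, e.g. HugePages_Surp.
    shopt -s extglob
    lookup_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")           # drop per-node prefixes
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue       # the "continue" runs in the log
            echo "$val"                            # value in kB, or a bare count
            return 0
        done
        return 1
    }
    lookup_meminfo HugePages_Surp                  # prints 0 for the snapshot above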
00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.384 07:14:56 -- setup/common.sh@33 -- # echo 0 00:05:34.384 07:14:56 -- setup/common.sh@33 -- # return 0 00:05:34.384 07:14:56 -- setup/hugepages.sh@97 -- # anon=0 00:05:34.384 07:14:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:34.384 07:14:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.384 07:14:56 -- setup/common.sh@18 -- # local node= 00:05:34.384 07:14:56 -- setup/common.sh@19 -- # local var val 00:05:34.384 07:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.384 07:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.384 07:14:56 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:34.384 07:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.384 07:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.384 07:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6656380 kB' 'MemAvailable: 9448216 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 455964 kB' 'Inactive: 2659644 kB' 'Active(anon): 128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119248 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184488 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101932 kB' 'KernelStack: 6736 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.384 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.384 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 
-- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- 
setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.385 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.385 07:14:56 -- setup/common.sh@33 -- # echo 0 00:05:34.385 07:14:56 -- setup/common.sh@33 -- # return 0 00:05:34.385 07:14:56 -- setup/hugepages.sh@99 -- # surp=0 00:05:34.385 07:14:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:34.385 07:14:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:34.385 07:14:56 -- setup/common.sh@18 -- # local node= 00:05:34.385 07:14:56 -- setup/common.sh@19 -- # local var val 00:05:34.385 07:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.385 07:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.385 07:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.385 07:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.385 07:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.385 07:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.385 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6656380 kB' 'MemAvailable: 9448216 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456184 kB' 'Inactive: 2659644 kB' 'Active(anon): 128324 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119404 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184480 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101924 kB' 
'KernelStack: 6720 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 320036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 
07:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 
00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
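(Each entry in this log carries a "script@line -- # command" prefix because the harness runs with bash xtrace enabled and a customised PS4. A rough sketch that produces a similar shape is shown here; the exact string SPDK's autotest_common.sh exports may differ.)

    # Rough approximation of the xtrace prefix seen throughout this log.
    export PS4='$(date "+%H:%M:%S") -- ${BASH_SOURCE[0]}@${LINENO} -- # '
    set -x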
00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.386 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.386 07:14:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:34.387 07:14:56 -- setup/common.sh@33 -- # echo 0 00:05:34.387 07:14:56 -- setup/common.sh@33 -- # return 0 00:05:34.387 nr_hugepages=1025 00:05:34.387 resv_hugepages=0 00:05:34.387 surplus_hugepages=0 00:05:34.387 anon_hugepages=0 00:05:34.387 07:14:56 -- setup/hugepages.sh@100 -- # resv=0 00:05:34.387 07:14:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:34.387 07:14:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:34.387 07:14:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:34.387 07:14:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:34.387 07:14:56 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:34.387 07:14:56 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:34.387 07:14:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:34.387 07:14:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:34.387 07:14:56 -- setup/common.sh@18 -- # local node= 00:05:34.387 07:14:56 -- setup/common.sh@19 -- # local var val 00:05:34.387 07:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.387 07:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.387 07:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.387 07:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.387 07:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.387 07:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6656660 kB' 'MemAvailable: 9448496 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456388 kB' 'Inactive: 2659644 kB' 'Active(anon): 128528 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119704 kB' 'Mapped: 51196 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184476 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101920 kB' 'KernelStack: 6784 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458564 kB' 'Committed_AS: 322340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 
'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.387 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.387 07:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.388 07:14:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.388 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.388 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.649 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.649 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:34.649 07:14:56 -- setup/common.sh@33 -- # echo 1025 00:05:34.649 07:14:56 -- setup/common.sh@33 -- # return 0 00:05:34.649 07:14:56 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:34.649 07:14:56 -- setup/hugepages.sh@112 -- # get_nodes 00:05:34.649 07:14:56 -- setup/hugepages.sh@27 -- # local node 00:05:34.649 07:14:56 -- setup/hugepages.sh@29 -- # for node in 
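[editor's note] The long per-key scan above is what setup/common.sh's get_meminfo helper appears to do each time a field such as HugePages_Total is requested: pick /proc/meminfo (or the per-node sysfs meminfo when a node is given), strip the "Node N " prefix that sysfs adds, then walk the "key: value" pairs until the requested field matches and echo its value (1025 is echoed and returned just above). A minimal stand-alone sketch of that lookup, with an illustrative function name and no claim to be the project's exact code:

  shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # per-node statistics live in sysfs and prefix every line with "Node N "
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # harmless no-op for /proc/meminfo
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo_sketch HugePages_Total        # e.g. 1025 during odd_alloc
  get_meminfo_sketch HugePages_Surp 0       # per-node lookup, 0 in this run

The scan in the trace looks quadratic only because xtrace prints every rejected key; the underlying loop is a single pass over the file.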
/sys/devices/system/node/node+([0-9]) 00:05:34.649 07:14:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:34.649 07:14:56 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:34.649 07:14:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:34.650 07:14:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:34.650 07:14:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:34.650 07:14:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:34.650 07:14:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:34.650 07:14:56 -- setup/common.sh@18 -- # local node=0 00:05:34.650 07:14:56 -- setup/common.sh@19 -- # local var val 00:05:34.650 07:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.650 07:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.650 07:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:34.650 07:14:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:34.650 07:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.650 07:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6656660 kB' 'MemUsed: 5582460 kB' 'SwapCached: 0 kB' 'Active: 456504 kB' 'Inactive: 2659644 kB' 'Active(anon): 128644 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2997992 kB' 'Mapped: 50936 kB' 'AnonPages: 119888 kB' 'Shmem: 10488 kB' 'KernelStack: 6784 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 184464 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.650 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.650 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # continue 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.651 07:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.651 07:14:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:34.651 07:14:56 -- setup/common.sh@33 -- # echo 0 00:05:34.651 07:14:56 -- setup/common.sh@33 -- # return 0 00:05:34.651 07:14:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:34.651 07:14:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:34.651 07:14:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:34.651 07:14:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:34.651 node0=1025 expecting 1025 00:05:34.651 ************************************ 00:05:34.651 END TEST odd_alloc 00:05:34.651 ************************************ 00:05:34.651 07:14:56 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:34.651 07:14:56 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:34.651 00:05:34.651 real 0m0.577s 00:05:34.651 user 0m0.266s 00:05:34.651 sys 0m0.326s 00:05:34.651 07:14:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:34.651 07:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:34.651 07:14:56 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:34.651 07:14:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.651 07:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.651 07:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:34.651 ************************************ 00:05:34.651 START TEST custom_alloc 00:05:34.651 ************************************ 00:05:34.651 07:14:56 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:34.651 07:14:56 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:34.651 07:14:56 -- setup/hugepages.sh@169 -- # local node 00:05:34.651 07:14:56 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:34.651 07:14:56 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:34.651 07:14:56 -- 
setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:34.651 07:14:56 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:34.651 07:14:56 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:34.651 07:14:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:34.651 07:14:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:34.651 07:14:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:34.651 07:14:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:34.651 07:14:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:34.651 07:14:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.651 07:14:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:34.651 07:14:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.651 07:14:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.651 07:14:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.651 07:14:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:34.651 07:14:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:34.652 07:14:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.652 07:14:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:34.652 07:14:56 -- setup/hugepages.sh@83 -- # : 0 00:05:34.652 07:14:56 -- setup/hugepages.sh@84 -- # : 0 00:05:34.652 07:14:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:34.652 07:14:56 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:34.652 07:14:56 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:34.652 07:14:56 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:34.652 07:14:56 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:34.652 07:14:56 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:34.652 07:14:56 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:34.652 07:14:56 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:34.652 07:14:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:34.652 07:14:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:34.652 07:14:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:34.652 07:14:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:34.652 07:14:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:34.652 07:14:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:34.652 07:14:56 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:34.652 07:14:56 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:34.652 07:14:56 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:34.652 07:14:56 -- setup/hugepages.sh@78 -- # return 0 00:05:34.652 07:14:56 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:34.652 07:14:56 -- setup/hugepages.sh@187 -- # setup output 00:05:34.652 07:14:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.652 07:14:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.913 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.913 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.913 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:34.913 07:14:57 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:34.913 07:14:57 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:34.913 07:14:57 -- setup/hugepages.sh@89 -- # local node 00:05:34.913 07:14:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:34.913 
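[editor's note] At this point the trace shows get_test_nr_hugepages turning the 1048576 kB request into nr_hugepages=512, placing all of it on node 0, and re-running setup.sh with HUGENODE='nodes_hp[0]=512'. With the 2048 kB hugepage size reported elsewhere in this log, the conversion is simple integer division; a back-of-the-envelope sketch (variable names are illustrative):

  requested_kb=1048576                                              # the 1 GiB custom_alloc asks for
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 kB on this VM
  nr_hugepages=$(( requested_kb / hugepage_kb ))                    # 1048576 / 2048 = 512
  echo "HUGENODE='nodes_hp[0]=${nr_hugepages}'"                     # all 512 pages pinned to node 0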
07:14:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:34.913 07:14:57 -- setup/hugepages.sh@92 -- # local surp 00:05:34.913 07:14:57 -- setup/hugepages.sh@93 -- # local resv 00:05:34.913 07:14:57 -- setup/hugepages.sh@94 -- # local anon 00:05:34.913 07:14:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:34.913 07:14:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:34.913 07:14:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:34.913 07:14:57 -- setup/common.sh@18 -- # local node= 00:05:34.913 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:34.913 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:34.913 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:34.913 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:34.913 07:14:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:34.913 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:34.913 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7709776 kB' 'MemAvailable: 10501612 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456308 kB' 'Inactive: 2659644 kB' 'Active(anon): 128448 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119496 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184456 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101900 kB' 'KernelStack: 6720 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55464 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 
07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 
00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.913 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.913 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:34.914 07:14:57 -- setup/common.sh@32 -- # continue 00:05:34.914 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.175 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.175 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.175 07:14:57 -- setup/common.sh@33 -- # echo 0 00:05:35.175 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.175 07:14:57 -- setup/hugepages.sh@97 -- # anon=0 00:05:35.175 07:14:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:35.175 07:14:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.175 07:14:57 -- setup/common.sh@18 -- # local node= 00:05:35.176 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.176 07:14:57 
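[editor's note] The "always [madvise] never" string tested near the start of verify_nr_hugepages is the kernel's transparent hugepage mode (presumably read from the standard sysfs path); the verifier only bothers counting AnonHugePages when that mode is not "[never]". On this VM the mode is "[madvise]" and AnonHugePages is 0 kB, which is why anon ends up 0 just above. A small sketch of that gate, assuming the usual sysfs location:

  thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
  anon=0
  if [[ $thp_mode != *"[never]"* ]]; then
      # AnonHugePages only matters when THP can hand out huge pages at all
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon=${anon} kB"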
-- setup/common.sh@20 -- # local mem_f mem 00:05:35.176 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.176 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.176 07:14:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.176 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.176 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7710340 kB' 'MemAvailable: 10502176 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456248 kB' 'Inactive: 2659644 kB' 'Active(anon): 128388 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119532 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184456 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101900 kB' 'KernelStack: 6736 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 
00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- 
setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.176 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.176 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.177 07:14:57 -- setup/common.sh@33 -- # echo 0 00:05:35.177 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.177 07:14:57 -- setup/hugepages.sh@99 -- # surp=0 00:05:35.177 07:14:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.177 07:14:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.177 07:14:57 -- setup/common.sh@18 -- # local node= 00:05:35.177 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.177 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.177 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.177 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.177 07:14:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.177 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.177 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7710652 kB' 'MemAvailable: 10502488 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456204 kB' 'Inactive: 2659644 kB' 'Active(anon): 128344 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119424 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184448 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101892 kB' 'KernelStack: 6720 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.177 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.177 07:14:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 
00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.178 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.178 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.178 07:14:57 -- setup/common.sh@33 -- # echo 0 00:05:35.178 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.178 nr_hugepages=512 00:05:35.178 07:14:57 -- setup/hugepages.sh@100 -- # resv=0 00:05:35.178 07:14:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:35.178 resv_hugepages=0 00:05:35.178 07:14:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:35.178 surplus_hugepages=0 00:05:35.178 anon_hugepages=0 00:05:35.178 07:14:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:35.178 07:14:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:35.178 07:14:57 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:35.178 07:14:57 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:35.178 07:14:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:35.178 07:14:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:35.178 07:14:57 -- setup/common.sh@18 -- # local node= 00:05:35.178 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.178 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.178 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.178 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.178 07:14:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.178 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.178 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7710652 kB' 'MemAvailable: 10502488 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 456244 kB' 'Inactive: 2659644 kB' 'Active(anon): 128384 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119468 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 82556 kB' 'Slab: 184448 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101892 kB' 'KernelStack: 6720 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983876 kB' 'Committed_AS: 320036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 
-- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 
00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.179 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.179 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 
-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.180 07:14:57 -- setup/common.sh@33 -- # echo 512 00:05:35.180 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.180 07:14:57 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:35.180 07:14:57 -- setup/hugepages.sh@112 -- # get_nodes 00:05:35.180 07:14:57 -- setup/hugepages.sh@27 -- # local node 
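The trace above is setup/common.sh's get_meminfo scanning every /proc/meminfo field until it reaches the one requested (here HugePages_Total, which returns 512). A minimal sketch of that kind of lookup follows; get_meminfo_sketch is a hypothetical name, and the real helper in setup/common.sh differs in detail (it mapfiles the file and strips the per-node prefix with an extglob), so treat this only as an illustration of the idea, not the actual implementation:

    # Sketch: return one field from /proc/meminfo, or from a node's sysfs meminfo
    # when a NUMA node id is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val
        # Per-node counters live in sysfs when a node id is passed.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node $node }              # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                       # kB for sizes, a bare count for HugePages_*
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # On this VM the values echoed in the surrounding trace would be, e.g.:
    #   get_meminfo_sketch HugePages_Total    -> 512
    #   get_meminfo_sketch HugePages_Surp 0   -> 0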
00:05:35.180 07:14:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.180 07:14:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:35.180 07:14:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:35.180 07:14:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.180 07:14:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:35.180 07:14:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:35.180 07:14:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:35.180 07:14:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.180 07:14:57 -- setup/common.sh@18 -- # local node=0 00:05:35.180 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.180 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.180 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.180 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:35.180 07:14:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:35.180 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.180 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 7710652 kB' 'MemUsed: 4528468 kB' 'SwapCached: 0 kB' 'Active: 456008 kB' 'Inactive: 2659644 kB' 'Active(anon): 128148 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2997992 kB' 'Mapped: 50936 kB' 'AnonPages: 119276 kB' 'Shmem: 10488 kB' 'KernelStack: 6736 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82556 kB' 'Slab: 184448 kB' 'SReclaimable: 82556 kB' 'SUnreclaim: 101892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': 
' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.180 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.180 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- 
setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 
07:14:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.181 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.181 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.181 07:14:57 -- setup/common.sh@33 -- # echo 0 00:05:35.181 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.181 07:14:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:35.181 07:14:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:35.181 07:14:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:35.181 07:14:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:35.181 node0=512 expecting 512 00:05:35.181 07:14:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:35.181 ************************************ 00:05:35.181 END TEST custom_alloc 00:05:35.181 ************************************ 00:05:35.181 07:14:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:35.181 00:05:35.181 real 0m0.563s 00:05:35.181 user 0m0.266s 00:05:35.181 sys 0m0.311s 00:05:35.181 07:14:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:35.181 07:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:35.181 07:14:57 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:35.181 07:14:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.181 07:14:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.181 07:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:35.181 ************************************ 00:05:35.181 START TEST no_shrink_alloc 00:05:35.181 ************************************ 00:05:35.181 07:14:57 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:35.181 07:14:57 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:35.181 07:14:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:35.181 07:14:57 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 
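The custom_alloc run above ended with the result it expected (node0=512 expecting 512, i.e. 512 hugepages of 2048 kB, matching the 1048576 kB of Hugetlb reported earlier), and the no_shrink_alloc test now calls get_test_nr_hugepages with 2097152 and node 0. Judging from the values visible in this log, the size-to-page-count conversion is plain division by Hugepagesize; a small hedged sketch of that arithmetic, not the script's actual code:

    # Sketch: the conversion the trace arrives at (nr_hugepages=1024).
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    size_kb=2097152                                                      # budget requested by the test
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"                                    # -> 1024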
00:05:35.181 07:14:57 -- setup/hugepages.sh@51 -- # shift 00:05:35.181 07:14:57 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:35.181 07:14:57 -- setup/hugepages.sh@52 -- # local node_ids 00:05:35.181 07:14:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:35.181 07:14:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:35.181 07:14:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:35.181 07:14:57 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:35.181 07:14:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:35.181 07:14:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:35.181 07:14:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:35.181 07:14:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:35.181 07:14:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:35.181 07:14:57 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:35.181 07:14:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:35.181 07:14:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:35.181 07:14:57 -- setup/hugepages.sh@73 -- # return 0 00:05:35.181 07:14:57 -- setup/hugepages.sh@198 -- # setup output 00:05:35.181 07:14:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.181 07:14:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.440 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.704 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.704 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:35.704 07:14:57 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:35.704 07:14:57 -- setup/hugepages.sh@89 -- # local node 00:05:35.704 07:14:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:35.704 07:14:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:35.704 07:14:57 -- setup/hugepages.sh@92 -- # local surp 00:05:35.704 07:14:57 -- setup/hugepages.sh@93 -- # local resv 00:05:35.704 07:14:57 -- setup/hugepages.sh@94 -- # local anon 00:05:35.704 07:14:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:35.704 07:14:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:35.704 07:14:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:35.704 07:14:57 -- setup/common.sh@18 -- # local node= 00:05:35.704 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.704 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.704 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.704 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.704 07:14:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.704 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.704 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6663728 kB' 'MemAvailable: 9455560 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 453940 kB' 'Inactive: 2659644 kB' 'Active(anon): 126080 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 
'AnonPages: 117384 kB' 'Mapped: 50408 kB' 'Shmem: 10488 kB' 'KReclaimable: 82552 kB' 'Slab: 184412 kB' 'SReclaimable: 82552 kB' 'SUnreclaim: 101860 kB' 'KernelStack: 6704 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55384 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 
07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.704 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.704 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 
07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:35.705 07:14:57 -- setup/common.sh@33 -- # echo 0 00:05:35.705 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.705 07:14:57 -- setup/hugepages.sh@97 -- # anon=0 00:05:35.705 07:14:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:35.705 07:14:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.705 07:14:57 -- setup/common.sh@18 -- # local node= 00:05:35.705 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.705 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.705 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.705 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.705 07:14:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.705 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.705 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6663764 kB' 'MemAvailable: 9455596 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 453512 kB' 'Inactive: 2659644 kB' 'Active(anon): 125652 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117044 kB' 'Mapped: 50124 kB' 'Shmem: 10488 kB' 'KReclaimable: 82548 kB' 'Slab: 184388 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101840 kB' 'KernelStack: 6640 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 
-- # read -r var val _ 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.705 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.705 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 
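The xtrace above shows setup/common.sh's get_meminfo walking the cached /proc/meminfo fields one entry at a time: each line is split on ': ', every key that is not the requested field is skipped with continue, and the matching value is echoed back to the caller (0 for AnonHugePages, giving anon=0). A minimal stand-alone sketch of that parsing pattern, assuming a simplified helper (no per-node meminfo or mapfile caching) rather than the exact SPDK source:

    # Hypothetical simplified reconstruction of the pattern traced above, not the SPDK helper itself.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Skip keys until the requested field is found, then print its value.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }

    get_meminfo_value AnonHugePages    # e.g. 0
    get_meminfo_value HugePages_Total  # e.g. 1024

The same loop is repeated below for HugePages_Surp, HugePages_Rsvd and HugePages_Total, which is why the trace re-scans every meminfo key for each requested field.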
00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.706 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.706 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.706 07:14:57 -- setup/common.sh@33 -- # echo 0 00:05:35.706 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.706 07:14:57 -- setup/hugepages.sh@99 -- # surp=0 00:05:35.706 07:14:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:35.706 07:14:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:35.706 07:14:57 -- setup/common.sh@18 -- # local node= 00:05:35.706 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.706 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.706 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.706 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.706 07:14:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.706 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.707 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6663764 kB' 'MemAvailable: 9455596 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 453800 kB' 'Inactive: 2659644 kB' 'Active(anon): 125940 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117024 kB' 'Mapped: 50124 kB' 'Shmem: 10488 kB' 'KReclaimable: 82548 kB' 'Slab: 184388 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101840 kB' 'KernelStack: 6624 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': 
' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 
-- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.707 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.707 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:35.708 07:14:57 -- setup/common.sh@33 -- # echo 0 00:05:35.708 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.708 nr_hugepages=1024 00:05:35.708 resv_hugepages=0 00:05:35.708 surplus_hugepages=0 00:05:35.708 anon_hugepages=0 00:05:35.708 07:14:57 -- setup/hugepages.sh@100 -- # resv=0 00:05:35.708 07:14:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:35.708 07:14:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:35.708 07:14:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:35.708 07:14:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:35.708 07:14:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:35.708 07:14:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:35.708 07:14:57 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:05:35.708 07:14:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:35.708 07:14:57 -- setup/common.sh@18 -- # local node= 00:05:35.708 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.708 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.708 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.708 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.708 07:14:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.708 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.708 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6663764 kB' 'MemAvailable: 9455596 kB' 'Buffers: 2684 kB' 'Cached: 2995308 kB' 'SwapCached: 0 kB' 'Active: 453760 kB' 'Inactive: 2659644 kB' 'Active(anon): 125900 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116984 kB' 'Mapped: 50124 kB' 'Shmem: 10488 kB' 'KReclaimable: 82548 kB' 'Slab: 184388 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101840 kB' 'KernelStack: 6624 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.708 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.708 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.709 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.709 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:35.709 07:14:57 -- setup/common.sh@33 -- # echo 1024 00:05:35.709 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.709 07:14:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:35.709 07:14:57 -- setup/hugepages.sh@112 -- # get_nodes 00:05:35.710 07:14:57 -- setup/hugepages.sh@27 -- # local node 00:05:35.710 07:14:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.710 07:14:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:35.710 07:14:57 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:35.710 07:14:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.710 07:14:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:35.710 07:14:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:35.710 07:14:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:35.710 07:14:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:35.710 07:14:57 -- setup/common.sh@18 -- # local node=0 00:05:35.710 07:14:57 -- setup/common.sh@19 -- # local var val 00:05:35.710 07:14:57 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.710 07:14:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.710 07:14:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:35.710 07:14:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:35.710 07:14:57 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.710 07:14:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6663764 kB' 'MemUsed: 5575356 kB' 'SwapCached: 0 kB' 'Active: 453464 kB' 'Inactive: 2659644 kB' 'Active(anon): 125604 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659644 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2997992 kB' 'Mapped: 50124 kB' 'AnonPages: 116948 kB' 'Shmem: 10488 kB' 'KernelStack: 6624 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82548 kB' 'Slab: 184376 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- 
setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.710 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.710 07:14:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.711 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.711 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.711 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.711 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.711 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.711 07:14:57 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.711 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.711 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.711 07:14:57 -- setup/common.sh@32 -- # continue 00:05:35.711 07:14:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:35.711 07:14:57 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.711 07:14:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:35.711 07:14:57 -- setup/common.sh@33 -- # echo 0 00:05:35.711 07:14:57 -- setup/common.sh@33 -- # return 0 00:05:35.711 07:14:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:35.711 07:14:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:35.711 07:14:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:35.711 node0=1024 expecting 1024 00:05:35.711 07:14:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:35.711 07:14:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:35.711 07:14:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:35.711 07:14:57 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:35.711 07:14:57 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:35.711 07:14:57 -- setup/hugepages.sh@202 -- # setup output 00:05:35.711 07:14:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.711 07:14:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.285 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.285 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:36.285 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:36.285 07:14:58 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:36.285 07:14:58 -- setup/hugepages.sh@89 -- # local node 00:05:36.285 07:14:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:36.285 07:14:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:36.285 07:14:58 -- setup/hugepages.sh@92 -- # local surp 00:05:36.285 07:14:58 -- setup/hugepages.sh@93 -- # local resv 00:05:36.285 07:14:58 -- setup/hugepages.sh@94 -- # local anon 00:05:36.285 07:14:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:36.285 07:14:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:36.286 07:14:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:36.286 07:14:58 -- setup/common.sh@18 -- # local node= 00:05:36.286 07:14:58 -- setup/common.sh@19 -- # local var val 00:05:36.286 07:14:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.286 07:14:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.286 07:14:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.286 07:14:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.286 07:14:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.286 07:14:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6665348 kB' 'MemAvailable: 9457184 kB' 'Buffers: 2684 kB' 'Cached: 2995312 kB' 'SwapCached: 0 kB' 'Active: 454104 kB' 'Inactive: 2659648 kB' 'Active(anon): 126244 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117608 kB' 'Mapped: 50324 kB' 'Shmem: 10488 kB' 'KReclaimable: 82548 kB' 'Slab: 184264 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101716 kB' 
'KernelStack: 6692 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55400 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- 
setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.286 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.286 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 
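[editor's note] The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: each line is split on ': ', the key is compared against the requested field (AnonHugePages in this call), and the matching value is echoed back. A minimal standalone sketch of that pattern, not the SPDK script itself; the function name and the default path are assumptions for illustration only:

    get_meminfo_sketch() {
        # Print the value of one meminfo field, e.g. "HugePages_Total".
        local get=$1 mem_f=${2:-/proc/meminfo}
        local var val
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"      # value only; the unit column is discarded
                return 0
            fi
        done < "$mem_f"
        return 1                 # field not present in this file
    }

    # Against the values logged in this run:
    #   get_meminfo_sketch HugePages_Total   -> 1024
    #   get_meminfo_sketch AnonHugePages     -> 0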
00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:36.287 07:14:58 -- setup/common.sh@33 -- # echo 0 00:05:36.287 07:14:58 -- setup/common.sh@33 -- # return 0 00:05:36.287 07:14:58 -- setup/hugepages.sh@97 -- # anon=0 00:05:36.287 07:14:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:36.287 07:14:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.287 07:14:58 -- setup/common.sh@18 -- # local node= 00:05:36.287 07:14:58 -- setup/common.sh@19 -- # local var val 00:05:36.287 07:14:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.287 07:14:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.287 07:14:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.287 07:14:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.287 07:14:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.287 07:14:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6665848 kB' 'MemAvailable: 9457684 kB' 'Buffers: 2684 kB' 'Cached: 2995312 kB' 'SwapCached: 0 kB' 'Active: 453556 kB' 'Inactive: 2659648 kB' 'Active(anon): 125696 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117044 kB' 'Mapped: 50124 kB' 'Shmem: 10488 kB' 'KReclaimable: 82548 kB' 'Slab: 184256 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101708 kB' 'KernelStack: 6624 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Unevictable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.287 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.287 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 
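[editor's note] Earlier in this verify pass hugepages.sh records anon=0 only after the "always [madvise] never" string is checked against "*[never]*", i.e. only when transparent hugepages are not globally disabled. A hedged sketch of that guard; the sysfs path and helper names are assumptions, and get_meminfo_sketch is the illustrative parser from the earlier note:

    thp_anon_hugepages() {
        # Report AnonHugePages only when THP is not set to "never".
        local enabled anon=0
        enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
        if [[ $enabled != *"[never]"* ]]; then
            anon=$(get_meminfo_sketch AnonHugePages)
        fi
        echo "$anon"             # 0 kB of anonymous THP in this run
    }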
00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.288 07:14:58 -- setup/common.sh@33 -- # echo 0 00:05:36.288 07:14:58 -- setup/common.sh@33 -- # return 0 00:05:36.288 07:14:58 -- setup/hugepages.sh@99 -- # surp=0 00:05:36.288 07:14:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:36.288 07:14:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:36.288 07:14:58 -- setup/common.sh@18 -- # local node= 00:05:36.288 07:14:58 -- setup/common.sh@19 -- # local var val 00:05:36.288 07:14:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.288 07:14:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.288 07:14:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.288 07:14:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.288 07:14:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.288 07:14:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6666112 kB' 'MemAvailable: 9457948 kB' 'Buffers: 2684 kB' 'Cached: 2995312 kB' 'SwapCached: 0 kB' 'Active: 453540 kB' 'Inactive: 2659648 kB' 'Active(anon): 125680 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117076 kB' 'Mapped: 50124 kB' 'Shmem: 10488 kB' 'KReclaimable: 82548 kB' 'Slab: 184256 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101708 kB' 'KernelStack: 6640 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55352 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.288 07:14:58 
-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.288 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.288 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- 
setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 
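[editor's note] get_meminfo also takes an optional node argument: with an empty node the "[[ -e /sys/devices/system/node/node/meminfo ]]" test above fails and the script stays on /proc/meminfo, while a concrete node id (node=0 later in this log) switches it to that node's own meminfo file. A small sketch of the same file selection, assuming the helper name:

    meminfo_path() {
        # Pick the per-node meminfo when a node id is given and present,
        # otherwise fall back to the system-wide /proc/meminfo.
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }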
00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.289 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.289 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:36.290 07:14:58 -- setup/common.sh@33 -- # echo 0 00:05:36.290 07:14:58 -- setup/common.sh@33 -- # return 0 00:05:36.290 07:14:58 -- setup/hugepages.sh@100 -- # resv=0 00:05:36.290 07:14:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:36.290 nr_hugepages=1024 00:05:36.290 07:14:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:36.290 resv_hugepages=0 00:05:36.290 07:14:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:36.290 surplus_hugepages=0 00:05:36.290 07:14:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:36.290 anon_hugepages=0 00:05:36.290 07:14:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.290 07:14:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:36.290 07:14:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:36.290 07:14:58 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:05:36.290 07:14:58 -- setup/common.sh@18 -- # local node= 00:05:36.290 07:14:58 -- setup/common.sh@19 -- # local var val 00:05:36.290 07:14:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.290 07:14:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.290 07:14:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:36.290 07:14:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:36.290 07:14:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.290 07:14:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6666256 kB' 'MemAvailable: 9458092 kB' 'Buffers: 2684 kB' 'Cached: 2995312 kB' 'SwapCached: 0 kB' 'Active: 453480 kB' 'Inactive: 2659648 kB' 'Active(anon): 125620 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116964 kB' 'Mapped: 50124 kB' 'Shmem: 10488 kB' 'KReclaimable: 82548 kB' 'Slab: 184256 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101708 kB' 'KernelStack: 6624 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459588 kB' 'Committed_AS: 305084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55336 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 
-- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.290 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 
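[editor's note] After resolving resv, the pass above echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then checks "(( 1024 == nr_hugepages + surp + resv ))" before moving on to the per-node counts. The same accounting as a sketch, reusing the illustrative parser from the earlier note; the function name is assumed and the commented values are the ones logged in this run:

    verify_hugepage_total() {
        # Succeeds when HugePages_Total equals the requested count plus
        # the surplus and reserved pages reported by meminfo.
        local nr_hugepages=$1 total surp resv
        total=$(get_meminfo_sketch HugePages_Total)   # 1024
        surp=$(get_meminfo_sketch HugePages_Surp)     # 0
        resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
        (( total == nr_hugepages + surp + resv ))
    }

    # verify_hugepage_total 1024   # holds for this run: 1024 == 1024 + 0 + 0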
00:05:36.290 07:14:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.290 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 
07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.291 07:14:58 -- setup/common.sh@33 -- # echo 1024 00:05:36.291 07:14:58 -- setup/common.sh@33 -- # return 0 00:05:36.291 07:14:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.291 07:14:58 -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.291 07:14:58 -- setup/hugepages.sh@27 -- # local node 00:05:36.291 07:14:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.291 07:14:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.291 07:14:58 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:36.291 07:14:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.291 07:14:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.291 07:14:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.291 07:14:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.291 07:14:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.291 07:14:58 -- setup/common.sh@18 -- # local node=0 00:05:36.291 07:14:58 -- setup/common.sh@19 -- # local var val 00:05:36.291 07:14:58 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.291 07:14:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.291 07:14:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.291 07:14:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.291 07:14:58 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.291 07:14:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239120 kB' 'MemFree: 6666588 kB' 'MemUsed: 5572532 kB' 'SwapCached: 0 kB' 'Active: 453492 kB' 'Inactive: 2659648 kB' 'Active(anon): 125632 kB' 'Inactive(anon): 0 kB' 'Active(file): 327860 kB' 'Inactive(file): 2659648 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2997996 kB' 'Mapped: 50124 kB' 
'AnonPages: 116980 kB' 'Shmem: 10488 kB' 'KernelStack: 6624 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82548 kB' 'Slab: 184256 kB' 'SReclaimable: 82548 kB' 'SUnreclaim: 101708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.291 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.291 07:14:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 
00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- 
setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@32 -- # continue 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.292 07:14:58 -- setup/common.sh@31 -- # read -r var val _ 
00:05:36.292 07:14:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.292 07:14:58 -- setup/common.sh@33 -- # echo 0 00:05:36.292 07:14:58 -- setup/common.sh@33 -- # return 0 00:05:36.292 07:14:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.292 07:14:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.292 07:14:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.292 07:14:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.292 07:14:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.292 node0=1024 expecting 1024 00:05:36.292 07:14:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.552 ************************************ 00:05:36.552 END TEST no_shrink_alloc 00:05:36.552 ************************************ 00:05:36.552 00:05:36.552 real 0m1.192s 00:05:36.552 user 0m0.562s 00:05:36.552 sys 0m0.619s 00:05:36.552 07:14:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.552 07:14:58 -- common/autotest_common.sh@10 -- # set +x 00:05:36.552 07:14:58 -- setup/hugepages.sh@217 -- # clear_hp 00:05:36.552 07:14:58 -- setup/hugepages.sh@37 -- # local node hp 00:05:36.552 07:14:58 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:36.552 07:14:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:36.552 07:14:58 -- setup/hugepages.sh@41 -- # echo 0 00:05:36.552 07:14:58 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:36.552 07:14:58 -- setup/hugepages.sh@41 -- # echo 0 00:05:36.552 07:14:58 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:36.552 07:14:58 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:36.552 ************************************ 00:05:36.552 END TEST hugepages 00:05:36.552 ************************************ 00:05:36.552 00:05:36.552 real 0m5.161s 00:05:36.552 user 0m2.382s 00:05:36.552 sys 0m2.663s 00:05:36.552 07:14:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.552 07:14:58 -- common/autotest_common.sh@10 -- # set +x 00:05:36.552 07:14:58 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:36.552 07:14:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.552 07:14:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.552 07:14:58 -- common/autotest_common.sh@10 -- # set +x 00:05:36.552 ************************************ 00:05:36.552 START TEST driver 00:05:36.552 ************************************ 00:05:36.552 07:14:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:36.552 * Looking for test storage... 
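The wall of "[[ field == HugePages_* ]] / continue" lines above is setup/common.sh's get_meminfo helper doing a linear scan: it reads /proc/meminfo (or, when a node index is given, /sys/devices/system/node/nodeN/meminfo) with IFS=': ', skips every field that is not the one requested, and echoes the value of the first match, which is how the test recovers HugePages_Total=1024 and HugePages_Surp=0 here. A minimal standalone sketch of that pattern in plain bash (the function name below is illustrative, not the exact SPDK helper):

get_meminfo_sketch() {
    # Usage: get_meminfo_sketch FIELD [NODE]
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <n> "; strip that first
        if [[ $line == "Node "* ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done <"$mem_f"
    return 1
}

# Matching the values seen in the trace above:
#   get_meminfo_sketch HugePages_Total     -> 1024
#   get_meminfo_sketch HugePages_Surp 0    -> 0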
00:05:36.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:36.552 07:14:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:36.552 07:14:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:36.552 07:14:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:36.552 07:14:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:36.552 07:14:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:36.552 07:14:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:36.552 07:14:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:36.552 07:14:58 -- scripts/common.sh@335 -- # IFS=.-: 00:05:36.552 07:14:58 -- scripts/common.sh@335 -- # read -ra ver1 00:05:36.552 07:14:58 -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.552 07:14:58 -- scripts/common.sh@336 -- # read -ra ver2 00:05:36.552 07:14:58 -- scripts/common.sh@337 -- # local 'op=<' 00:05:36.552 07:14:58 -- scripts/common.sh@339 -- # ver1_l=2 00:05:36.552 07:14:58 -- scripts/common.sh@340 -- # ver2_l=1 00:05:36.552 07:14:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:36.552 07:14:58 -- scripts/common.sh@343 -- # case "$op" in 00:05:36.552 07:14:58 -- scripts/common.sh@344 -- # : 1 00:05:36.552 07:14:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:36.552 07:14:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.552 07:14:58 -- scripts/common.sh@364 -- # decimal 1 00:05:36.812 07:14:58 -- scripts/common.sh@352 -- # local d=1 00:05:36.812 07:14:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.812 07:14:58 -- scripts/common.sh@354 -- # echo 1 00:05:36.812 07:14:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:36.812 07:14:58 -- scripts/common.sh@365 -- # decimal 2 00:05:36.812 07:14:58 -- scripts/common.sh@352 -- # local d=2 00:05:36.812 07:14:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.812 07:14:58 -- scripts/common.sh@354 -- # echo 2 00:05:36.812 07:14:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:36.812 07:14:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:36.812 07:14:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:36.812 07:14:58 -- scripts/common.sh@367 -- # return 0 00:05:36.812 07:14:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.812 07:14:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:36.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.812 --rc genhtml_branch_coverage=1 00:05:36.812 --rc genhtml_function_coverage=1 00:05:36.812 --rc genhtml_legend=1 00:05:36.812 --rc geninfo_all_blocks=1 00:05:36.812 --rc geninfo_unexecuted_blocks=1 00:05:36.812 00:05:36.812 ' 00:05:36.812 07:14:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:36.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.812 --rc genhtml_branch_coverage=1 00:05:36.812 --rc genhtml_function_coverage=1 00:05:36.812 --rc genhtml_legend=1 00:05:36.812 --rc geninfo_all_blocks=1 00:05:36.812 --rc geninfo_unexecuted_blocks=1 00:05:36.812 00:05:36.812 ' 00:05:36.812 07:14:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:36.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.812 --rc genhtml_branch_coverage=1 00:05:36.812 --rc genhtml_function_coverage=1 00:05:36.812 --rc genhtml_legend=1 00:05:36.812 --rc geninfo_all_blocks=1 00:05:36.812 --rc geninfo_unexecuted_blocks=1 00:05:36.812 00:05:36.812 ' 00:05:36.812 07:14:58 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:36.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.812 --rc genhtml_branch_coverage=1 00:05:36.812 --rc genhtml_function_coverage=1 00:05:36.812 --rc genhtml_legend=1 00:05:36.812 --rc geninfo_all_blocks=1 00:05:36.812 --rc geninfo_unexecuted_blocks=1 00:05:36.812 00:05:36.812 ' 00:05:36.812 07:14:58 -- setup/driver.sh@68 -- # setup reset 00:05:36.812 07:14:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:36.812 07:14:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.381 07:14:59 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:37.381 07:14:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.381 07:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.381 07:14:59 -- common/autotest_common.sh@10 -- # set +x 00:05:37.381 ************************************ 00:05:37.381 START TEST guess_driver 00:05:37.381 ************************************ 00:05:37.381 07:14:59 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:37.381 07:14:59 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:37.381 07:14:59 -- setup/driver.sh@47 -- # local fail=0 00:05:37.381 07:14:59 -- setup/driver.sh@49 -- # pick_driver 00:05:37.381 07:14:59 -- setup/driver.sh@36 -- # vfio 00:05:37.381 07:14:59 -- setup/driver.sh@21 -- # local iommu_grups 00:05:37.381 07:14:59 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:37.381 07:14:59 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:37.381 07:14:59 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:37.381 07:14:59 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:37.381 07:14:59 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:37.381 07:14:59 -- setup/driver.sh@32 -- # return 1 00:05:37.381 07:14:59 -- setup/driver.sh@38 -- # uio 00:05:37.381 07:14:59 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:37.381 07:14:59 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:37.381 07:14:59 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:37.381 07:14:59 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:37.381 07:14:59 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:37.381 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:37.381 07:14:59 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:37.381 Looking for driver=uio_pci_generic 00:05:37.381 07:14:59 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:37.381 07:14:59 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:37.381 07:14:59 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:37.381 07:14:59 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:37.381 07:14:59 -- setup/driver.sh@45 -- # setup output config 00:05:37.381 07:14:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.381 07:14:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:37.948 07:15:00 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:37.949 07:15:00 -- setup/driver.sh@58 -- # continue 00:05:37.949 07:15:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:37.949 07:15:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:37.949 07:15:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:37.949 07:15:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.208 07:15:00 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:38.208 07:15:00 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:38.208 07:15:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:38.208 07:15:00 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:38.208 07:15:00 -- setup/driver.sh@65 -- # setup reset 00:05:38.208 07:15:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:38.208 07:15:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:38.776 ************************************ 00:05:38.776 END TEST guess_driver 00:05:38.776 ************************************ 00:05:38.776 00:05:38.776 real 0m1.458s 00:05:38.776 user 0m0.538s 00:05:38.776 sys 0m0.905s 00:05:38.776 07:15:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.776 07:15:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.776 ************************************ 00:05:38.776 END TEST driver 00:05:38.776 ************************************ 00:05:38.776 00:05:38.776 real 0m2.241s 00:05:38.776 user 0m0.876s 00:05:38.776 sys 0m1.413s 00:05:38.776 07:15:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:38.776 07:15:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.776 07:15:00 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:38.776 07:15:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.776 07:15:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.776 07:15:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.776 ************************************ 00:05:38.776 START TEST devices 00:05:38.776 ************************************ 00:05:38.776 07:15:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:38.776 * Looking for test storage... 00:05:38.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:38.776 07:15:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:38.776 07:15:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:38.776 07:15:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.034 07:15:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.034 07:15:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.034 07:15:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.034 07:15:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.034 07:15:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.034 07:15:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.034 07:15:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.035 07:15:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:39.035 07:15:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.035 07:15:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.035 07:15:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.035 07:15:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.035 07:15:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.035 07:15:01 -- scripts/common.sh@344 -- # : 1 00:05:39.035 07:15:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.035 07:15:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.035 07:15:01 -- scripts/common.sh@364 -- # decimal 1 00:05:39.035 07:15:01 -- scripts/common.sh@352 -- # local d=1 00:05:39.035 07:15:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.035 07:15:01 -- scripts/common.sh@354 -- # echo 1 00:05:39.035 07:15:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.035 07:15:01 -- scripts/common.sh@365 -- # decimal 2 00:05:39.035 07:15:01 -- scripts/common.sh@352 -- # local d=2 00:05:39.035 07:15:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.035 07:15:01 -- scripts/common.sh@354 -- # echo 2 00:05:39.035 07:15:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.035 07:15:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.035 07:15:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.035 07:15:01 -- scripts/common.sh@367 -- # return 0 00:05:39.035 07:15:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.035 07:15:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.035 --rc genhtml_branch_coverage=1 00:05:39.035 --rc genhtml_function_coverage=1 00:05:39.035 --rc genhtml_legend=1 00:05:39.035 --rc geninfo_all_blocks=1 00:05:39.035 --rc geninfo_unexecuted_blocks=1 00:05:39.035 00:05:39.035 ' 00:05:39.035 07:15:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.035 --rc genhtml_branch_coverage=1 00:05:39.035 --rc genhtml_function_coverage=1 00:05:39.035 --rc genhtml_legend=1 00:05:39.035 --rc geninfo_all_blocks=1 00:05:39.035 --rc geninfo_unexecuted_blocks=1 00:05:39.035 00:05:39.035 ' 00:05:39.035 07:15:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.035 --rc genhtml_branch_coverage=1 00:05:39.035 --rc genhtml_function_coverage=1 00:05:39.035 --rc genhtml_legend=1 00:05:39.035 --rc geninfo_all_blocks=1 00:05:39.035 --rc geninfo_unexecuted_blocks=1 00:05:39.035 00:05:39.035 ' 00:05:39.035 07:15:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.035 --rc genhtml_branch_coverage=1 00:05:39.035 --rc genhtml_function_coverage=1 00:05:39.035 --rc genhtml_legend=1 00:05:39.035 --rc geninfo_all_blocks=1 00:05:39.035 --rc geninfo_unexecuted_blocks=1 00:05:39.035 00:05:39.035 ' 00:05:39.035 07:15:01 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:39.035 07:15:01 -- setup/devices.sh@192 -- # setup reset 00:05:39.035 07:15:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:39.035 07:15:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.971 07:15:01 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:39.971 07:15:01 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:39.971 07:15:01 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:39.971 07:15:01 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:39.971 07:15:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:39.971 07:15:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:39.971 07:15:01 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:39.971 07:15:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:39.971 07:15:01 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:39.971 07:15:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:39.971 07:15:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:39.971 07:15:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:39.971 07:15:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:39.971 07:15:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:39.971 07:15:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:39.971 07:15:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:39.971 07:15:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:39.971 07:15:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:39.971 07:15:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:39.971 07:15:01 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:39.971 07:15:01 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:39.971 07:15:01 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:39.971 07:15:01 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:39.971 07:15:01 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:39.971 07:15:01 -- setup/devices.sh@196 -- # blocks=() 00:05:39.971 07:15:01 -- setup/devices.sh@196 -- # declare -a blocks 00:05:39.971 07:15:01 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:39.971 07:15:01 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:39.971 07:15:01 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:39.971 07:15:01 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.971 07:15:01 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:39.971 07:15:01 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:39.971 07:15:01 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:39.971 07:15:01 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:39.971 07:15:01 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:39.971 07:15:01 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:39.971 07:15:01 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:39.971 No valid GPT data, bailing 00:05:39.971 07:15:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:39.971 07:15:02 -- scripts/common.sh@393 -- # pt= 00:05:39.971 07:15:02 -- scripts/common.sh@394 -- # return 1 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:39.971 07:15:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:39.971 07:15:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:39.971 07:15:02 -- setup/common.sh@80 -- # echo 5368709120 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:39.971 07:15:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.971 07:15:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:39.971 07:15:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.971 07:15:02 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:39.971 07:15:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:39.971 07:15:02 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:39.971 07:15:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
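The get_zoned_devs calls just above filter out zoned namespaces before the mount tests pick a disk: for each /sys/block/nvme* entry the helper reads queue/zoned and treats anything other than "none" as zoned (the traced helper also declares a bdf variable to track the device's PCI address, which is omitted in this sketch). A rough standalone equivalent, with an illustrative function name:

list_zoned_nvme() {
    local nvme zoned
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        read -r zoned <"$nvme/queue/zoned"
        if [[ $zoned != none ]]; then
            echo "${nvme##*/}"   # report only the device name, e.g. nvme0n1
        fi
    done
}

In this run every namespace reports "none" ([[ none != none ]] fails each time), so nothing is excluded.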
00:05:39.971 07:15:02 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:39.971 07:15:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:39.971 No valid GPT data, bailing 00:05:39.971 07:15:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:39.971 07:15:02 -- scripts/common.sh@393 -- # pt= 00:05:39.971 07:15:02 -- scripts/common.sh@394 -- # return 1 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:39.971 07:15:02 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:39.971 07:15:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:39.971 07:15:02 -- setup/common.sh@80 -- # echo 4294967296 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:39.971 07:15:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.971 07:15:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:39.971 07:15:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.971 07:15:02 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:39.971 07:15:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:39.971 07:15:02 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:39.971 07:15:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:39.971 07:15:02 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:39.971 07:15:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:39.971 No valid GPT data, bailing 00:05:39.971 07:15:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:39.971 07:15:02 -- scripts/common.sh@393 -- # pt= 00:05:39.971 07:15:02 -- scripts/common.sh@394 -- # return 1 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:39.971 07:15:02 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:39.971 07:15:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:39.971 07:15:02 -- setup/common.sh@80 -- # echo 4294967296 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:39.971 07:15:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:39.971 07:15:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:39.971 07:15:02 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:39.971 07:15:02 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:39.971 07:15:02 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:39.971 07:15:02 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:39.971 07:15:02 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:39.971 07:15:02 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:39.972 07:15:02 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:39.972 07:15:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:40.230 No valid GPT data, bailing 00:05:40.230 07:15:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:40.230 07:15:02 -- scripts/common.sh@393 -- # pt= 00:05:40.231 07:15:02 -- scripts/common.sh@394 -- # return 1 00:05:40.231 07:15:02 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:40.231 07:15:02 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:40.231 07:15:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:40.231 07:15:02 -- setup/common.sh@80 -- # echo 4294967296 
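Each candidate namespace is then vetted by block_in_use plus a size check: scripts/spdk-gpt.py and blkid -s PTTYPE both come back empty ("No valid GPT data, bailing", pt=''), so the disk counts as free, and its size must reach min_disk_size=3221225472 (3 GiB); nvme0n1 passes at 5368709120 bytes and the three 4294967296-byte namespaces pass as well. A condensed sketch of the same filter using only blkid and sysfs (the device list mirrors this run, and deriving bytes from 512-byte sectors is an assumption about how sec_size_to_bytes gets its number):

MIN_DISK_SIZE=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

block_in_use_sketch() {
    # True when blkid finds an existing partition table on the device.
    local dev=$1 pt
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
    [[ -n $pt ]]
}

dev_size_bytes() {
    # /sys/block/<dev>/size counts 512-byte sectors.
    local dev=$1 sectors
    read -r sectors <"/sys/block/$dev/size"
    echo $((sectors * 512))
}

blocks=()
for dev in nvme0n1 nvme1n1 nvme1n2 nvme1n3; do
    block_in_use_sketch "$dev" && continue
    (( $(dev_size_bytes "$dev") >= MIN_DISK_SIZE )) || continue
    blocks+=("$dev")
done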
00:05:40.231 07:15:02 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:40.231 07:15:02 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:40.231 07:15:02 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:40.231 07:15:02 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:40.231 07:15:02 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:40.231 07:15:02 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:40.231 07:15:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.231 07:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.231 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:05:40.231 ************************************ 00:05:40.231 START TEST nvme_mount 00:05:40.231 ************************************ 00:05:40.231 07:15:02 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:40.231 07:15:02 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:40.231 07:15:02 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:40.231 07:15:02 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:40.231 07:15:02 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:40.231 07:15:02 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:40.231 07:15:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:40.231 07:15:02 -- setup/common.sh@40 -- # local part_no=1 00:05:40.231 07:15:02 -- setup/common.sh@41 -- # local size=1073741824 00:05:40.231 07:15:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:40.231 07:15:02 -- setup/common.sh@44 -- # parts=() 00:05:40.231 07:15:02 -- setup/common.sh@44 -- # local parts 00:05:40.231 07:15:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:40.231 07:15:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.231 07:15:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:40.231 07:15:02 -- setup/common.sh@46 -- # (( part++ )) 00:05:40.231 07:15:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.231 07:15:02 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:40.231 07:15:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:40.231 07:15:02 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:41.169 Creating new GPT entries in memory. 00:05:41.169 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:41.169 other utilities. 00:05:41.169 07:15:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:41.169 07:15:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.169 07:15:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:41.169 07:15:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.169 07:15:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:42.105 Creating new GPT entries in memory. 00:05:42.105 The operation has completed successfully. 
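From here nvme_mount exercises the full partition -> format -> mount -> verify -> cleanup cycle on the chosen disk: sgdisk wipes the label and creates one small partition (sectors 2048-264191) while scripts/sync_dev_uevents.sh waits for the partition uevent, the partition gets mkfs.ext4 -qF and is mounted under test/setup/nvme_mount, a test_nvme marker file is created and checked, and cleanup unmounts and wipefs-wipes both the partition and the whole disk. A condensed replay of those steps under two stated assumptions: udevadm settle stands in for the uevent-wait helper, and the marker-file line is a guess at what the bare ':' in the trace redirects to.

disk=/dev/nvme0n1
part=${disk}p1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                # drop any existing GPT/MBR label
sgdisk "$disk" --new=1:2048:264191      # one partition, 128 MiB at 512-byte sectors
udevadm settle                          # wait for the new partition node to appear

mkdir -p "$mnt"
mkfs.ext4 -qF "$part"
mount "$part" "$mnt"
: > "$mnt/test_nvme"                    # marker file the verify step checks for

umount "$mnt"                           # cleanup_nvme equivalent
wipefs --all "$part"
wipefs --all "$disk"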
00:05:42.105 07:15:04 -- setup/common.sh@57 -- # (( part++ )) 00:05:42.105 07:15:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.105 07:15:04 -- setup/common.sh@62 -- # wait 64152 00:05:42.364 07:15:04 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.364 07:15:04 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:42.364 07:15:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.364 07:15:04 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:42.364 07:15:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:42.364 07:15:04 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.364 07:15:04 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.364 07:15:04 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:42.364 07:15:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:42.364 07:15:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.364 07:15:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.365 07:15:04 -- setup/devices.sh@53 -- # local found=0 00:05:42.365 07:15:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.365 07:15:04 -- setup/devices.sh@56 -- # : 00:05:42.365 07:15:04 -- setup/devices.sh@59 -- # local pci status 00:05:42.365 07:15:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.365 07:15:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:42.365 07:15:04 -- setup/devices.sh@47 -- # setup output config 00:05:42.365 07:15:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.365 07:15:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.365 07:15:04 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.365 07:15:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:42.365 07:15:04 -- setup/devices.sh@63 -- # found=1 00:05:42.365 07:15:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.365 07:15:04 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.365 07:15:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.932 07:15:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.932 07:15:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.932 07:15:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.932 07:15:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.932 07:15:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.932 07:15:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:42.932 07:15:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.932 07:15:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.932 07:15:05 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:42.932 07:15:05 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:42.932 07:15:05 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.932 07:15:05 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:42.932 07:15:05 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:42.932 07:15:05 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:42.932 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:42.932 07:15:05 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:42.932 07:15:05 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:43.190 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.190 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:43.190 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:43.190 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:43.190 07:15:05 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:43.190 07:15:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:43.190 07:15:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.190 07:15:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:43.190 07:15:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:43.190 07:15:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.448 07:15:05 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.448 07:15:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:43.448 07:15:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:43.448 07:15:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:43.448 07:15:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:43.448 07:15:05 -- setup/devices.sh@53 -- # local found=0 00:05:43.448 07:15:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:43.448 07:15:05 -- setup/devices.sh@56 -- # : 00:05:43.448 07:15:05 -- setup/devices.sh@59 -- # local pci status 00:05:43.448 07:15:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.448 07:15:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:43.448 07:15:05 -- setup/devices.sh@47 -- # setup output config 00:05:43.448 07:15:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.448 07:15:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:43.448 07:15:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:43.448 07:15:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:43.448 07:15:05 -- setup/devices.sh@63 -- # found=1 00:05:43.448 07:15:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.448 07:15:05 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:43.448 
07:15:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.016 07:15:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.016 07:15:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.016 07:15:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.016 07:15:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.016 07:15:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.016 07:15:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:44.016 07:15:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.016 07:15:06 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.016 07:15:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.016 07:15:06 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.016 07:15:06 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:44.016 07:15:06 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:44.016 07:15:06 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:44.016 07:15:06 -- setup/devices.sh@50 -- # local mount_point= 00:05:44.016 07:15:06 -- setup/devices.sh@51 -- # local test_file= 00:05:44.016 07:15:06 -- setup/devices.sh@53 -- # local found=0 00:05:44.016 07:15:06 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:44.016 07:15:06 -- setup/devices.sh@59 -- # local pci status 00:05:44.016 07:15:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.016 07:15:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:44.016 07:15:06 -- setup/devices.sh@47 -- # setup output config 00:05:44.016 07:15:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.016 07:15:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:44.275 07:15:06 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.275 07:15:06 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:44.275 07:15:06 -- setup/devices.sh@63 -- # found=1 00:05:44.275 07:15:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.275 07:15:06 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.275 07:15:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.535 07:15:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.535 07:15:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.794 07:15:06 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:44.794 07:15:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.794 07:15:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.794 07:15:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:44.794 07:15:06 -- setup/devices.sh@68 -- # return 0 00:05:44.794 07:15:06 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:44.794 07:15:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.794 07:15:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.794 07:15:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.794 07:15:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:44.794 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:44.794 00:05:44.794 real 0m4.624s 00:05:44.794 user 0m1.065s 00:05:44.794 sys 0m1.235s 00:05:44.794 07:15:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.794 ************************************ 00:05:44.794 END TEST nvme_mount 00:05:44.794 ************************************ 00:05:44.794 07:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.794 07:15:06 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:44.794 07:15:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.794 07:15:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.794 07:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.794 ************************************ 00:05:44.794 START TEST dm_mount 00:05:44.794 ************************************ 00:05:44.794 07:15:06 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:44.794 07:15:06 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:44.794 07:15:06 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:44.794 07:15:06 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:44.794 07:15:06 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:44.794 07:15:06 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:44.794 07:15:06 -- setup/common.sh@40 -- # local part_no=2 00:05:44.794 07:15:06 -- setup/common.sh@41 -- # local size=1073741824 00:05:44.794 07:15:06 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:44.794 07:15:06 -- setup/common.sh@44 -- # parts=() 00:05:44.794 07:15:06 -- setup/common.sh@44 -- # local parts 00:05:44.794 07:15:06 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:44.794 07:15:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.794 07:15:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.794 07:15:06 -- setup/common.sh@46 -- # (( part++ )) 00:05:44.794 07:15:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.794 07:15:06 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.794 07:15:06 -- setup/common.sh@46 -- # (( part++ )) 00:05:44.794 07:15:06 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.794 07:15:06 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:44.794 07:15:06 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:44.794 07:15:06 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:45.731 Creating new GPT entries in memory. 00:05:45.731 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:45.731 other utilities. 00:05:45.731 07:15:07 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:45.731 07:15:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.731 07:15:07 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.732 07:15:07 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.732 07:15:07 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:47.109 Creating new GPT entries in memory. 00:05:47.109 The operation has completed successfully. 00:05:47.109 07:15:09 -- setup/common.sh@57 -- # (( part++ )) 00:05:47.109 07:15:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.109 07:15:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:47.109 07:15:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:47.109 07:15:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:48.046 The operation has completed successfully. 00:05:48.046 07:15:10 -- setup/common.sh@57 -- # (( part++ )) 00:05:48.046 07:15:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.046 07:15:10 -- setup/common.sh@62 -- # wait 64612 00:05:48.046 07:15:10 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:48.046 07:15:10 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.046 07:15:10 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.046 07:15:10 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:48.046 07:15:10 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:48.046 07:15:10 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.046 07:15:10 -- setup/devices.sh@161 -- # break 00:05:48.046 07:15:10 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.046 07:15:10 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:48.046 07:15:10 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:48.046 07:15:10 -- setup/devices.sh@166 -- # dm=dm-0 00:05:48.046 07:15:10 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:48.046 07:15:10 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:48.046 07:15:10 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.046 07:15:10 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:48.046 07:15:10 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.046 07:15:10 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:48.046 07:15:10 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:48.046 07:15:10 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.046 07:15:10 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.046 07:15:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:48.046 07:15:10 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:48.046 07:15:10 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.046 07:15:10 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.046 07:15:10 -- setup/devices.sh@53 -- # local found=0 00:05:48.046 07:15:10 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.046 07:15:10 -- setup/devices.sh@56 -- # : 00:05:48.046 07:15:10 -- setup/devices.sh@59 -- # local pci status 00:05:48.046 07:15:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.046 07:15:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:48.046 07:15:10 -- setup/devices.sh@47 -- # setup output config 00:05:48.046 07:15:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.046 07:15:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.305 07:15:10 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.305 07:15:10 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:48.305 07:15:10 -- setup/devices.sh@63 -- # found=1 00:05:48.305 07:15:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.305 07:15:10 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.305 07:15:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.564 07:15:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.564 07:15:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.564 07:15:10 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.564 07:15:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.823 07:15:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.823 07:15:10 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:48.823 07:15:10 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.823 07:15:10 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:48.823 07:15:10 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:48.823 07:15:10 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:48.823 07:15:10 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:48.823 07:15:10 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:48.823 07:15:10 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:48.823 07:15:10 -- setup/devices.sh@50 -- # local mount_point= 00:05:48.823 07:15:10 -- setup/devices.sh@51 -- # local test_file= 00:05:48.823 07:15:10 -- setup/devices.sh@53 -- # local found=0 00:05:48.823 07:15:10 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:48.823 07:15:10 -- setup/devices.sh@59 -- # local pci status 00:05:48.823 07:15:10 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.823 07:15:10 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:48.823 07:15:10 -- setup/devices.sh@47 -- # setup output config 00:05:48.823 07:15:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:48.823 07:15:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:48.823 07:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.823 07:15:11 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:48.823 07:15:11 -- setup/devices.sh@63 -- # found=1 00:05:48.823 07:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.823 07:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:48.823 07:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.398 07:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.398 07:15:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.398 07:15:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:49.398 07:15:11 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.398 07:15:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:49.398 07:15:11 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:49.398 07:15:11 -- setup/devices.sh@68 -- # return 0 00:05:49.398 07:15:11 -- setup/devices.sh@187 -- # cleanup_dm 00:05:49.398 07:15:11 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.398 07:15:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:49.398 07:15:11 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:49.398 07:15:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.398 07:15:11 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:49.398 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:49.398 07:15:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:49.398 07:15:11 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:49.398 00:05:49.398 real 0m4.641s 00:05:49.398 user 0m0.700s 00:05:49.398 sys 0m0.864s 00:05:49.398 07:15:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.398 ************************************ 00:05:49.398 END TEST dm_mount 00:05:49.399 ************************************ 00:05:49.399 07:15:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.399 07:15:11 -- setup/devices.sh@1 -- # cleanup 00:05:49.399 07:15:11 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:49.399 07:15:11 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.399 07:15:11 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.399 07:15:11 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:49.399 07:15:11 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.399 07:15:11 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:49.657 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.657 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:49.657 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:49.657 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:49.657 07:15:11 -- setup/devices.sh@12 -- # cleanup_dm 00:05:49.657 07:15:11 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:49.916 07:15:11 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:49.916 07:15:11 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:49.916 07:15:11 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:49.916 07:15:11 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:49.916 07:15:11 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:49.916 ************************************ 00:05:49.916 END TEST devices 00:05:49.916 ************************************ 00:05:49.916 00:05:49.916 real 0m10.998s 00:05:49.916 user 0m2.537s 00:05:49.916 sys 0m2.758s 00:05:49.917 07:15:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.917 07:15:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.917 00:05:49.917 real 0m23.233s 00:05:49.917 user 0m7.964s 00:05:49.917 sys 0m9.481s 00:05:49.917 07:15:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:49.917 07:15:11 -- common/autotest_common.sh@10 -- # set +x 00:05:49.917 ************************************ 00:05:49.917 END TEST setup.sh 00:05:49.917 ************************************ 00:05:49.917 07:15:12 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:49.917 Hugepages 00:05:49.917 node hugesize free / total 00:05:49.917 node0 1048576kB 0 / 0 00:05:49.917 node0 2048kB 2048 / 2048 00:05:49.917 00:05:49.917 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:50.176 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:50.176 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:50.176 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:50.176 07:15:12 -- spdk/autotest.sh@128 -- # uname -s 00:05:50.176 07:15:12 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:50.176 07:15:12 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:50.176 07:15:12 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:51.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:51.113 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:51.113 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:51.113 07:15:13 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:52.048 07:15:14 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:52.048 07:15:14 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:52.048 07:15:14 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:52.048 07:15:14 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:52.048 07:15:14 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:52.048 07:15:14 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:52.048 07:15:14 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:52.048 07:15:14 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:52.048 07:15:14 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:52.307 07:15:14 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:52.307 07:15:14 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:52.307 07:15:14 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:52.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:52.566 Waiting for block devices as requested 00:05:52.566 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.826 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:52.826 07:15:14 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:52.826 07:15:14 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:52.826 07:15:14 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:52.826 07:15:14 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:52.826 07:15:14 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:52.826 07:15:14 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:52.826 07:15:14 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:52.826 07:15:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:52.826 07:15:15 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:52.826 07:15:15 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:52.826 07:15:15 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:52.826 07:15:15 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:52.826 07:15:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:52.826 07:15:15 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:52.826 07:15:15 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:52.826 07:15:15 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:52.826 07:15:15 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:52.826 07:15:15 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:52.826 07:15:15 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:52.826 07:15:15 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:52.826 07:15:15 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:52.826 07:15:15 -- common/autotest_common.sh@1552 -- # continue 00:05:52.826 07:15:15 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:52.826 07:15:15 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:52.826 07:15:15 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:52.826 07:15:15 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:52.826 07:15:15 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:52.826 07:15:15 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:52.826 07:15:15 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:52.826 07:15:15 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:05:52.826 07:15:15 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:05:52.826 07:15:15 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:05:52.826 07:15:15 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:52.826 07:15:15 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:52.826 07:15:15 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:52.826 07:15:15 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:52.826 07:15:15 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:52.826 07:15:15 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:52.826 07:15:15 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:05:52.826 07:15:15 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:52.826 07:15:15 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:52.826 07:15:15 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:52.826 07:15:15 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:52.826 07:15:15 -- common/autotest_common.sh@1552 -- # continue 00:05:52.826 07:15:15 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:52.826 07:15:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.826 07:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:52.826 07:15:15 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:52.826 07:15:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.826 07:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:52.826 07:15:15 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:53.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.761 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:53.761 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:53.761 07:15:15 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:53.761 07:15:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.761 07:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:53.761 07:15:16 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:53.761 07:15:16 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:53.761 07:15:16 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:53.761 07:15:16 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:53.761 07:15:16 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:53.761 07:15:16 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:53.761 07:15:16 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:53.761 07:15:16 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:53.761 07:15:16 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:53.761 07:15:16 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:53.761 07:15:16 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:54.020 07:15:16 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:54.020 07:15:16 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:54.020 07:15:16 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:54.020 07:15:16 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:54.020 07:15:16 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:54.020 07:15:16 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:54.020 07:15:16 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:54.020 07:15:16 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:54.020 07:15:16 -- common/autotest_common.sh@1575 -- # device=0x0010 00:05:54.020 07:15:16 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:54.020 07:15:16 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:05:54.020 07:15:16 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:05:54.020 07:15:16 -- common/autotest_common.sh@1588 -- # return 0 00:05:54.020 07:15:16 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:54.020 07:15:16 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:54.020 07:15:16 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:54.020 07:15:16 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:54.020 07:15:16 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:54.020 07:15:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.020 07:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.020 07:15:16 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:54.020 07:15:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.020 07:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.020 07:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.020 ************************************ 00:05:54.020 START TEST env 00:05:54.020 ************************************ 00:05:54.020 07:15:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:54.020 * Looking for test storage... 
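Two of the gating checks traced in the pre-cleanup pass above are easy to reproduce by hand: the Identify Controller fields that nvme_namespace_revert keys on (OACS and unvmcap), and the PCI device-ID test opal_revert_cleanup uses to decide whether any controller needs an OPAL revert. A standalone sketch, run as root, assuming nvme-cli is installed and the first controller is /dev/nvme0; the 0x8 mask is an inference about how the hidden oacs_ns_manage assignment is derived:

    ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # 0x12a in this run
    (( oacs & 0x8 )) && echo "namespace management supported"     # bit 3 of OACS
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "no unallocated capacity, nothing to revert"

    # opal_revert_cleanup only collects controllers whose PCI device ID is 0x0a54;
    # the emulated controllers here report 0x0010, so its bdf list stays empty.
    cat /sys/bus/pci/devices/0000:00:06.0/device                  # -> 0x0010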
00:05:54.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:54.020 07:15:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:54.020 07:15:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:54.020 07:15:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:54.020 07:15:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:54.020 07:15:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:54.020 07:15:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:54.020 07:15:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:54.020 07:15:16 -- scripts/common.sh@335 -- # IFS=.-: 00:05:54.020 07:15:16 -- scripts/common.sh@335 -- # read -ra ver1 00:05:54.020 07:15:16 -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.020 07:15:16 -- scripts/common.sh@336 -- # read -ra ver2 00:05:54.020 07:15:16 -- scripts/common.sh@337 -- # local 'op=<' 00:05:54.020 07:15:16 -- scripts/common.sh@339 -- # ver1_l=2 00:05:54.020 07:15:16 -- scripts/common.sh@340 -- # ver2_l=1 00:05:54.020 07:15:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:54.020 07:15:16 -- scripts/common.sh@343 -- # case "$op" in 00:05:54.020 07:15:16 -- scripts/common.sh@344 -- # : 1 00:05:54.020 07:15:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:54.020 07:15:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.020 07:15:16 -- scripts/common.sh@364 -- # decimal 1 00:05:54.020 07:15:16 -- scripts/common.sh@352 -- # local d=1 00:05:54.020 07:15:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.021 07:15:16 -- scripts/common.sh@354 -- # echo 1 00:05:54.021 07:15:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:54.280 07:15:16 -- scripts/common.sh@365 -- # decimal 2 00:05:54.280 07:15:16 -- scripts/common.sh@352 -- # local d=2 00:05:54.280 07:15:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.280 07:15:16 -- scripts/common.sh@354 -- # echo 2 00:05:54.280 07:15:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:54.280 07:15:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:54.280 07:15:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:54.280 07:15:16 -- scripts/common.sh@367 -- # return 0 00:05:54.280 07:15:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.280 07:15:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:54.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.280 --rc genhtml_branch_coverage=1 00:05:54.280 --rc genhtml_function_coverage=1 00:05:54.280 --rc genhtml_legend=1 00:05:54.280 --rc geninfo_all_blocks=1 00:05:54.280 --rc geninfo_unexecuted_blocks=1 00:05:54.280 00:05:54.280 ' 00:05:54.280 07:15:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:54.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.280 --rc genhtml_branch_coverage=1 00:05:54.280 --rc genhtml_function_coverage=1 00:05:54.280 --rc genhtml_legend=1 00:05:54.280 --rc geninfo_all_blocks=1 00:05:54.280 --rc geninfo_unexecuted_blocks=1 00:05:54.280 00:05:54.280 ' 00:05:54.280 07:15:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:54.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.280 --rc genhtml_branch_coverage=1 00:05:54.280 --rc genhtml_function_coverage=1 00:05:54.280 --rc genhtml_legend=1 00:05:54.280 --rc geninfo_all_blocks=1 00:05:54.280 --rc geninfo_unexecuted_blocks=1 00:05:54.280 00:05:54.280 ' 00:05:54.280 07:15:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:54.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.280 --rc genhtml_branch_coverage=1 00:05:54.280 --rc genhtml_function_coverage=1 00:05:54.280 --rc genhtml_legend=1 00:05:54.280 --rc geninfo_all_blocks=1 00:05:54.280 --rc geninfo_unexecuted_blocks=1 00:05:54.280 00:05:54.280 ' 00:05:54.280 07:15:16 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:54.280 07:15:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.280 07:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.280 07:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.280 ************************************ 00:05:54.280 START TEST env_memory 00:05:54.280 ************************************ 00:05:54.280 07:15:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:54.280 00:05:54.280 00:05:54.280 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.280 http://cunit.sourceforge.net/ 00:05:54.280 00:05:54.280 00:05:54.280 Suite: memory 00:05:54.280 Test: alloc and free memory map ...[2024-11-28 07:15:16.355397] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:54.280 passed 00:05:54.280 Test: mem map translation ...[2024-11-28 07:15:16.382726] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:54.280 [2024-11-28 07:15:16.383008] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:54.280 [2024-11-28 07:15:16.383276] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:54.280 [2024-11-28 07:15:16.383740] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:54.280 passed 00:05:54.280 Test: mem map registration ...[2024-11-28 07:15:16.436824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:54.280 [2024-11-28 07:15:16.437109] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:54.280 passed 00:05:54.280 Test: mem map adjacent registrations ...passed 00:05:54.280 00:05:54.280 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.280 suites 1 1 n/a 0 0 00:05:54.280 tests 4 4 4 0 0 00:05:54.280 asserts 152 152 152 0 n/a 00:05:54.280 00:05:54.280 Elapsed time = 0.204 seconds 00:05:54.280 00:05:54.280 real 0m0.226s 00:05:54.280 user 0m0.203s 00:05:54.280 sys 0m0.015s 00:05:54.280 07:15:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.280 07:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:54.280 ************************************ 00:05:54.280 END TEST env_memory 00:05:54.280 ************************************ 00:05:54.540 07:15:16 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:54.540 07:15:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.540 07:15:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.540 07:15:16 -- 
common/autotest_common.sh@10 -- # set +x 00:05:54.540 ************************************ 00:05:54.540 START TEST env_vtophys 00:05:54.540 ************************************ 00:05:54.540 07:15:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:54.540 EAL: lib.eal log level changed from notice to debug 00:05:54.540 EAL: Detected lcore 0 as core 0 on socket 0 00:05:54.540 EAL: Detected lcore 1 as core 0 on socket 0 00:05:54.540 EAL: Detected lcore 2 as core 0 on socket 0 00:05:54.540 EAL: Detected lcore 3 as core 0 on socket 0 00:05:54.540 EAL: Detected lcore 4 as core 0 on socket 0 00:05:54.540 EAL: Detected lcore 5 as core 0 on socket 0 00:05:54.541 EAL: Detected lcore 6 as core 0 on socket 0 00:05:54.541 EAL: Detected lcore 7 as core 0 on socket 0 00:05:54.541 EAL: Detected lcore 8 as core 0 on socket 0 00:05:54.541 EAL: Detected lcore 9 as core 0 on socket 0 00:05:54.541 EAL: Maximum logical cores by configuration: 128 00:05:54.541 EAL: Detected CPU lcores: 10 00:05:54.541 EAL: Detected NUMA nodes: 1 00:05:54.541 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:54.541 EAL: Detected shared linkage of DPDK 00:05:54.541 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:54.541 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:54.541 EAL: Registered [vdev] bus. 00:05:54.541 EAL: bus.vdev log level changed from disabled to notice 00:05:54.541 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:54.541 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:54.541 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:54.541 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:54.541 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:54.541 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:54.541 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:54.541 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:54.541 EAL: No shared files mode enabled, IPC will be disabled 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Selected IOVA mode 'PA' 00:05:54.541 EAL: Probing VFIO support... 00:05:54.541 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:54.541 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:54.541 EAL: Ask a virtual area of 0x2e000 bytes 00:05:54.541 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:54.541 EAL: Setting up physically contiguous memory... 
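The physically contiguous memory the EAL is about to map is backed by the 2 MB hugepages reserved earlier in the run (the setup.sh status output listed node0 with 2048 of them). Inside the harness that reservation is handled by scripts/setup.sh; outside it, a minimal single-node equivalent via sysfs could look like this (path and page count are assumptions for this VM):

    # Reserve 2048 x 2 MB hugepages (4 GiB) and confirm the kernel accepted them.
    echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo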
00:05:54.541 EAL: Setting maximum number of open files to 524288 00:05:54.541 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:54.541 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:54.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.541 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:54.541 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.541 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:54.541 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:54.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.541 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:54.541 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.541 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:54.541 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:54.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.541 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:54.541 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.541 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:54.541 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:54.541 EAL: Ask a virtual area of 0x61000 bytes 00:05:54.541 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:54.541 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:54.541 EAL: Ask a virtual area of 0x400000000 bytes 00:05:54.541 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:54.541 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:54.541 EAL: Hugepages will be freed exactly as allocated. 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: TSC frequency is ~2200000 KHz 00:05:54.541 EAL: Main lcore 0 is ready (tid=7f7ac001ca00;cpuset=[0]) 00:05:54.541 EAL: Trying to obtain current memory policy. 00:05:54.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.541 EAL: Restoring previous memory policy: 0 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was expanded by 2MB 00:05:54.541 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:54.541 EAL: Mem event callback 'spdk:(nil)' registered 00:05:54.541 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:54.541 00:05:54.541 00:05:54.541 CUnit - A unit testing framework for C - Version 2.1-3 00:05:54.541 http://cunit.sourceforge.net/ 00:05:54.541 00:05:54.541 00:05:54.541 Suite: components_suite 00:05:54.541 Test: vtophys_malloc_test ...passed 00:05:54.541 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:54.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.541 EAL: Restoring previous memory policy: 4 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was expanded by 4MB 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was shrunk by 4MB 00:05:54.541 EAL: Trying to obtain current memory policy. 00:05:54.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.541 EAL: Restoring previous memory policy: 4 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was expanded by 6MB 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was shrunk by 6MB 00:05:54.541 EAL: Trying to obtain current memory policy. 00:05:54.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.541 EAL: Restoring previous memory policy: 4 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was expanded by 10MB 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was shrunk by 10MB 00:05:54.541 EAL: Trying to obtain current memory policy. 00:05:54.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.541 EAL: Restoring previous memory policy: 4 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was expanded by 18MB 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was shrunk by 18MB 00:05:54.541 EAL: Trying to obtain current memory policy. 00:05:54.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.541 EAL: Restoring previous memory policy: 4 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was expanded by 34MB 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was shrunk by 34MB 00:05:54.541 EAL: Trying to obtain current memory policy. 
00:05:54.541 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.541 EAL: Restoring previous memory policy: 4 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.541 EAL: request: mp_malloc_sync 00:05:54.541 EAL: No shared files mode enabled, IPC is disabled 00:05:54.541 EAL: Heap on socket 0 was expanded by 66MB 00:05:54.541 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.801 EAL: request: mp_malloc_sync 00:05:54.801 EAL: No shared files mode enabled, IPC is disabled 00:05:54.801 EAL: Heap on socket 0 was shrunk by 66MB 00:05:54.801 EAL: Trying to obtain current memory policy. 00:05:54.801 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.801 EAL: Restoring previous memory policy: 4 00:05:54.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.801 EAL: request: mp_malloc_sync 00:05:54.801 EAL: No shared files mode enabled, IPC is disabled 00:05:54.801 EAL: Heap on socket 0 was expanded by 130MB 00:05:54.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.801 EAL: request: mp_malloc_sync 00:05:54.801 EAL: No shared files mode enabled, IPC is disabled 00:05:54.801 EAL: Heap on socket 0 was shrunk by 130MB 00:05:54.801 EAL: Trying to obtain current memory policy. 00:05:54.801 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:54.801 EAL: Restoring previous memory policy: 4 00:05:54.801 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.801 EAL: request: mp_malloc_sync 00:05:54.801 EAL: No shared files mode enabled, IPC is disabled 00:05:54.801 EAL: Heap on socket 0 was expanded by 258MB 00:05:55.060 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.060 EAL: request: mp_malloc_sync 00:05:55.060 EAL: No shared files mode enabled, IPC is disabled 00:05:55.060 EAL: Heap on socket 0 was shrunk by 258MB 00:05:55.060 EAL: Trying to obtain current memory policy. 00:05:55.060 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.319 EAL: Restoring previous memory policy: 4 00:05:55.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.319 EAL: request: mp_malloc_sync 00:05:55.319 EAL: No shared files mode enabled, IPC is disabled 00:05:55.319 EAL: Heap on socket 0 was expanded by 514MB 00:05:55.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.579 EAL: request: mp_malloc_sync 00:05:55.579 EAL: No shared files mode enabled, IPC is disabled 00:05:55.579 EAL: Heap on socket 0 was shrunk by 514MB 00:05:55.579 EAL: Trying to obtain current memory policy. 
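The expand/shrink sizes reported by this suite follow a simple progression: every round after the initial 2 MB bootstrap is 2^n + 2 MB (4, 6, 10, 18, ... plus the 514 MB and 1026 MB rounds still to come), consistent with power-of-two test allocations and roughly one extra 2 MB hugepage of heap overhead; the overhead reading is an inference, not something the log states. The progression itself is easy to regenerate:

    for n in $(seq 1 10); do printf '%d MB  ' $(( (1 << n) + 2 )); done; echo
    # -> 4 MB  6 MB  10 MB  18 MB  34 MB  66 MB  130 MB  258 MB  514 MB  1026 MB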
00:05:55.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:55.837 EAL: Restoring previous memory policy: 4 00:05:55.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:55.837 EAL: request: mp_malloc_sync 00:05:55.837 EAL: No shared files mode enabled, IPC is disabled 00:05:55.837 EAL: Heap on socket 0 was expanded by 1026MB 00:05:56.096 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.355 passed 00:05:56.355 00:05:56.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.355 suites 1 1 n/a 0 0 00:05:56.355 tests 2 2 2 0 0 00:05:56.355 asserts 5297 5297 5297 0 n/a 00:05:56.355 00:05:56.355 Elapsed time = 1.650 seconds 00:05:56.355 EAL: request: mp_malloc_sync 00:05:56.355 EAL: No shared files mode enabled, IPC is disabled 00:05:56.355 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:56.355 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.355 EAL: request: mp_malloc_sync 00:05:56.355 EAL: No shared files mode enabled, IPC is disabled 00:05:56.355 EAL: Heap on socket 0 was shrunk by 2MB 00:05:56.355 EAL: No shared files mode enabled, IPC is disabled 00:05:56.355 EAL: No shared files mode enabled, IPC is disabled 00:05:56.355 EAL: No shared files mode enabled, IPC is disabled 00:05:56.355 00:05:56.355 real 0m1.855s 00:05:56.355 user 0m1.008s 00:05:56.355 sys 0m0.702s 00:05:56.355 ************************************ 00:05:56.355 END TEST env_vtophys 00:05:56.355 ************************************ 00:05:56.355 07:15:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.355 07:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.355 07:15:18 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.355 07:15:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.355 07:15:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.355 07:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.355 ************************************ 00:05:56.355 START TEST env_pci 00:05:56.355 ************************************ 00:05:56.355 07:15:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.355 00:05:56.355 00:05:56.355 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.355 http://cunit.sourceforge.net/ 00:05:56.355 00:05:56.355 00:05:56.355 Suite: pci 00:05:56.355 Test: pci_hook ...[2024-11-28 07:15:18.516597] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65756 has claimed it 00:05:56.355 passed 00:05:56.355 00:05:56.355 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.355 suites 1 1 n/a 0 0 00:05:56.355 tests 1 1 1 0 0 00:05:56.355 asserts 25 25 25 0 n/a 00:05:56.355 00:05:56.355 Elapsed time = 0.002 seconds 00:05:56.355 EAL: Cannot find device (10000:00:01.0) 00:05:56.355 EAL: Failed to attach device on primary process 00:05:56.355 ************************************ 00:05:56.355 END TEST env_pci 00:05:56.355 ************************************ 00:05:56.355 00:05:56.355 real 0m0.021s 00:05:56.355 user 0m0.010s 00:05:56.355 sys 0m0.010s 00:05:56.355 07:15:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.355 07:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.355 07:15:18 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:56.355 07:15:18 -- env/env.sh@15 -- # uname 00:05:56.355 07:15:18 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:56.355 07:15:18 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:56.355 07:15:18 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.355 07:15:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:56.355 07:15:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.355 07:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.355 ************************************ 00:05:56.355 START TEST env_dpdk_post_init 00:05:56.355 ************************************ 00:05:56.355 07:15:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.355 EAL: Detected CPU lcores: 10 00:05:56.355 EAL: Detected NUMA nodes: 1 00:05:56.355 EAL: Detected shared linkage of DPDK 00:05:56.355 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.355 EAL: Selected IOVA mode 'PA' 00:05:56.615 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.615 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:56.615 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:56.615 Starting DPDK initialization... 00:05:56.615 Starting SPDK post initialization... 00:05:56.615 SPDK NVMe probe 00:05:56.615 Attaching to 0000:00:06.0 00:05:56.615 Attaching to 0000:00:07.0 00:05:56.615 Attached to 0000:00:06.0 00:05:56.615 Attached to 0000:00:07.0 00:05:56.615 Cleaning up... 00:05:56.615 00:05:56.615 real 0m0.180s 00:05:56.615 user 0m0.042s 00:05:56.615 sys 0m0.038s 00:05:56.615 ************************************ 00:05:56.615 END TEST env_dpdk_post_init 00:05:56.615 ************************************ 00:05:56.615 07:15:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.615 07:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.615 07:15:18 -- env/env.sh@26 -- # uname 00:05:56.615 07:15:18 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:56.615 07:15:18 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.615 07:15:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.615 07:15:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.615 07:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.615 ************************************ 00:05:56.615 START TEST env_mem_callbacks 00:05:56.615 ************************************ 00:05:56.615 07:15:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:56.615 EAL: Detected CPU lcores: 10 00:05:56.615 EAL: Detected NUMA nodes: 1 00:05:56.615 EAL: Detected shared linkage of DPDK 00:05:56.615 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.615 EAL: Selected IOVA mode 'PA' 00:05:56.874 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.875 00:05:56.875 00:05:56.875 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.875 http://cunit.sourceforge.net/ 00:05:56.875 00:05:56.875 00:05:56.875 Suite: memory 00:05:56.875 Test: test ... 
00:05:56.875 register 0x200000200000 2097152 00:05:56.875 malloc 3145728 00:05:56.875 register 0x200000400000 4194304 00:05:56.875 buf 0x200000500000 len 3145728 PASSED 00:05:56.875 malloc 64 00:05:56.875 buf 0x2000004fff40 len 64 PASSED 00:05:56.875 malloc 4194304 00:05:56.875 register 0x200000800000 6291456 00:05:56.875 buf 0x200000a00000 len 4194304 PASSED 00:05:56.875 free 0x200000500000 3145728 00:05:56.875 free 0x2000004fff40 64 00:05:56.875 unregister 0x200000400000 4194304 PASSED 00:05:56.875 free 0x200000a00000 4194304 00:05:56.875 unregister 0x200000800000 6291456 PASSED 00:05:56.875 malloc 8388608 00:05:56.875 register 0x200000400000 10485760 00:05:56.875 buf 0x200000600000 len 8388608 PASSED 00:05:56.875 free 0x200000600000 8388608 00:05:56.875 unregister 0x200000400000 10485760 PASSED 00:05:56.875 passed 00:05:56.875 00:05:56.875 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.875 suites 1 1 n/a 0 0 00:05:56.875 tests 1 1 1 0 0 00:05:56.875 asserts 15 15 15 0 n/a 00:05:56.875 00:05:56.875 Elapsed time = 0.008 seconds 00:05:56.875 00:05:56.875 real 0m0.142s 00:05:56.875 user 0m0.014s 00:05:56.875 sys 0m0.025s 00:05:56.875 07:15:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.875 ************************************ 00:05:56.875 END TEST env_mem_callbacks 00:05:56.875 ************************************ 00:05:56.875 07:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:56.875 00:05:56.875 real 0m2.896s 00:05:56.875 user 0m1.479s 00:05:56.875 sys 0m1.048s 00:05:56.875 07:15:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.875 07:15:19 -- common/autotest_common.sh@10 -- # set +x 00:05:56.875 ************************************ 00:05:56.875 END TEST env 00:05:56.875 ************************************ 00:05:56.875 07:15:19 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:56.875 07:15:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.875 07:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.875 07:15:19 -- common/autotest_common.sh@10 -- # set +x 00:05:56.875 ************************************ 00:05:56.875 START TEST rpc 00:05:56.875 ************************************ 00:05:56.875 07:15:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:56.875 * Looking for test storage... 
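env_dpdk_post_init could attach to 0000:00:06.0 and 0000:00:07.0 only because the afterboot pass earlier rebound both controllers from the kernel nvme driver to uio_pci_generic (VFIO modules are unavailable in this VM). The same rebinding can be driven by hand with the repo's setup script; PCI_ALLOWED mirrors the allow-listing used by the device tests above, and the exact driver chosen depends on the host:

    cd /home/vagrant/spdk_repo/spdk
    sudo PCI_ALLOWED="0000:00:06.0 0000:00:07.0" ./scripts/setup.sh config   # nvme -> uio_pci_generic here
    ./scripts/setup.sh status                                                # both controllers now under a userspace driver
    sudo ./scripts/setup.sh reset                                            # hand them back to the kernel nvme driver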
00:05:56.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:56.875 07:15:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:56.875 07:15:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:56.875 07:15:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:57.134 07:15:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:57.134 07:15:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:57.134 07:15:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:57.134 07:15:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:57.134 07:15:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:57.134 07:15:19 -- scripts/common.sh@335 -- # read -ra ver1 00:05:57.134 07:15:19 -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.134 07:15:19 -- scripts/common.sh@336 -- # read -ra ver2 00:05:57.134 07:15:19 -- scripts/common.sh@337 -- # local 'op=<' 00:05:57.134 07:15:19 -- scripts/common.sh@339 -- # ver1_l=2 00:05:57.134 07:15:19 -- scripts/common.sh@340 -- # ver2_l=1 00:05:57.134 07:15:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:57.134 07:15:19 -- scripts/common.sh@343 -- # case "$op" in 00:05:57.134 07:15:19 -- scripts/common.sh@344 -- # : 1 00:05:57.134 07:15:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:57.134 07:15:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.134 07:15:19 -- scripts/common.sh@364 -- # decimal 1 00:05:57.134 07:15:19 -- scripts/common.sh@352 -- # local d=1 00:05:57.134 07:15:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.134 07:15:19 -- scripts/common.sh@354 -- # echo 1 00:05:57.134 07:15:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:57.134 07:15:19 -- scripts/common.sh@365 -- # decimal 2 00:05:57.134 07:15:19 -- scripts/common.sh@352 -- # local d=2 00:05:57.134 07:15:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.134 07:15:19 -- scripts/common.sh@354 -- # echo 2 00:05:57.135 07:15:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:57.135 07:15:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:57.135 07:15:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:57.135 07:15:19 -- scripts/common.sh@367 -- # return 0 00:05:57.135 07:15:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.135 07:15:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:57.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.135 --rc genhtml_branch_coverage=1 00:05:57.135 --rc genhtml_function_coverage=1 00:05:57.135 --rc genhtml_legend=1 00:05:57.135 --rc geninfo_all_blocks=1 00:05:57.135 --rc geninfo_unexecuted_blocks=1 00:05:57.135 00:05:57.135 ' 00:05:57.135 07:15:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:57.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.135 --rc genhtml_branch_coverage=1 00:05:57.135 --rc genhtml_function_coverage=1 00:05:57.135 --rc genhtml_legend=1 00:05:57.135 --rc geninfo_all_blocks=1 00:05:57.135 --rc geninfo_unexecuted_blocks=1 00:05:57.135 00:05:57.135 ' 00:05:57.135 07:15:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:57.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.135 --rc genhtml_branch_coverage=1 00:05:57.135 --rc genhtml_function_coverage=1 00:05:57.135 --rc genhtml_legend=1 00:05:57.135 --rc geninfo_all_blocks=1 00:05:57.135 --rc geninfo_unexecuted_blocks=1 00:05:57.135 00:05:57.135 ' 00:05:57.135 07:15:19 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:57.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.135 --rc genhtml_branch_coverage=1 00:05:57.135 --rc genhtml_function_coverage=1 00:05:57.135 --rc genhtml_legend=1 00:05:57.135 --rc geninfo_all_blocks=1 00:05:57.135 --rc geninfo_unexecuted_blocks=1 00:05:57.135 00:05:57.135 ' 00:05:57.135 07:15:19 -- rpc/rpc.sh@65 -- # spdk_pid=65878 00:05:57.135 07:15:19 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.135 07:15:19 -- rpc/rpc.sh@67 -- # waitforlisten 65878 00:05:57.135 07:15:19 -- common/autotest_common.sh@829 -- # '[' -z 65878 ']' 00:05:57.135 07:15:19 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:57.135 07:15:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.135 07:15:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.135 07:15:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.135 07:15:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.135 07:15:19 -- common/autotest_common.sh@10 -- # set +x 00:05:57.135 [2024-11-28 07:15:19.300091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.135 [2024-11-28 07:15:19.300758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65878 ] 00:05:57.394 [2024-11-28 07:15:19.440983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.394 [2024-11-28 07:15:19.535020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.394 [2024-11-28 07:15:19.535409] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:57.394 [2024-11-28 07:15:19.535559] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65878' to capture a snapshot of events at runtime. 00:05:57.394 [2024-11-28 07:15:19.535581] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65878 for offline analysis/debug. 
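The rpc_integrity flow that follows exercises the just-started target purely over its JSON-RPC socket; the test's rpc_cmd helper ends up issuing the same methods that scripts/rpc.py exposes. A rough by-hand equivalent, with a sleep standing in for the test's waitforlisten and assuming root privileges plus already-reserved hugepages:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_tgt -e bdev &                        # -e bdev enables the bdev tracepoint group
    sleep 1                                               # crude stand-in for waitforlisten
    ./scripts/rpc.py bdev_malloc_create 8 512             # 8 MB malloc bdev, 512 B blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # expect 2, as the test asserts below
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    kill %1                                               # stop the target when done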
00:05:57.394 [2024-11-28 07:15:19.535621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.358 07:15:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.358 07:15:20 -- common/autotest_common.sh@862 -- # return 0 00:05:58.358 07:15:20 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.358 07:15:20 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.358 07:15:20 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:58.358 07:15:20 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:58.358 07:15:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.358 07:15:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.358 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.358 ************************************ 00:05:58.358 START TEST rpc_integrity 00:05:58.358 ************************************ 00:05:58.358 07:15:20 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:58.358 07:15:20 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.358 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.358 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.359 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.359 07:15:20 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.359 07:15:20 -- rpc/rpc.sh@13 -- # jq length 00:05:58.359 07:15:20 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.359 07:15:20 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.359 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.359 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.359 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.359 07:15:20 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:58.359 07:15:20 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.359 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.359 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.359 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.359 07:15:20 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.359 { 00:05:58.359 "name": "Malloc0", 00:05:58.359 "aliases": [ 00:05:58.359 "6e1f3d9c-5cbd-4e90-a254-10ddc1f505f1" 00:05:58.359 ], 00:05:58.359 "product_name": "Malloc disk", 00:05:58.359 "block_size": 512, 00:05:58.359 "num_blocks": 16384, 00:05:58.359 "uuid": "6e1f3d9c-5cbd-4e90-a254-10ddc1f505f1", 00:05:58.359 "assigned_rate_limits": { 00:05:58.359 "rw_ios_per_sec": 0, 00:05:58.359 "rw_mbytes_per_sec": 0, 00:05:58.359 "r_mbytes_per_sec": 0, 00:05:58.359 "w_mbytes_per_sec": 0 00:05:58.359 }, 00:05:58.359 "claimed": false, 00:05:58.359 "zoned": false, 00:05:58.359 "supported_io_types": { 00:05:58.359 "read": true, 00:05:58.359 "write": true, 00:05:58.359 "unmap": true, 00:05:58.359 "write_zeroes": true, 00:05:58.359 "flush": true, 00:05:58.359 "reset": true, 00:05:58.359 "compare": false, 00:05:58.359 "compare_and_write": false, 00:05:58.359 "abort": true, 00:05:58.359 "nvme_admin": false, 00:05:58.359 "nvme_io": false 00:05:58.359 }, 00:05:58.359 "memory_domains": [ 00:05:58.359 { 00:05:58.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.359 
"dma_device_type": 2 00:05:58.359 } 00:05:58.359 ], 00:05:58.359 "driver_specific": {} 00:05:58.359 } 00:05:58.359 ]' 00:05:58.359 07:15:20 -- rpc/rpc.sh@17 -- # jq length 00:05:58.359 07:15:20 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.359 07:15:20 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:58.359 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.359 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.359 [2024-11-28 07:15:20.511328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:58.359 [2024-11-28 07:15:20.511406] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.359 [2024-11-28 07:15:20.511432] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15e2030 00:05:58.359 [2024-11-28 07:15:20.511443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.359 [2024-11-28 07:15:20.512999] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.359 [2024-11-28 07:15:20.513035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.359 Passthru0 00:05:58.359 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.359 07:15:20 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.359 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.359 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.359 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.359 07:15:20 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:58.359 { 00:05:58.359 "name": "Malloc0", 00:05:58.359 "aliases": [ 00:05:58.359 "6e1f3d9c-5cbd-4e90-a254-10ddc1f505f1" 00:05:58.359 ], 00:05:58.359 "product_name": "Malloc disk", 00:05:58.359 "block_size": 512, 00:05:58.359 "num_blocks": 16384, 00:05:58.359 "uuid": "6e1f3d9c-5cbd-4e90-a254-10ddc1f505f1", 00:05:58.359 "assigned_rate_limits": { 00:05:58.359 "rw_ios_per_sec": 0, 00:05:58.359 "rw_mbytes_per_sec": 0, 00:05:58.359 "r_mbytes_per_sec": 0, 00:05:58.359 "w_mbytes_per_sec": 0 00:05:58.359 }, 00:05:58.359 "claimed": true, 00:05:58.359 "claim_type": "exclusive_write", 00:05:58.359 "zoned": false, 00:05:58.359 "supported_io_types": { 00:05:58.359 "read": true, 00:05:58.359 "write": true, 00:05:58.359 "unmap": true, 00:05:58.359 "write_zeroes": true, 00:05:58.359 "flush": true, 00:05:58.359 "reset": true, 00:05:58.359 "compare": false, 00:05:58.359 "compare_and_write": false, 00:05:58.359 "abort": true, 00:05:58.359 "nvme_admin": false, 00:05:58.359 "nvme_io": false 00:05:58.359 }, 00:05:58.359 "memory_domains": [ 00:05:58.359 { 00:05:58.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.359 "dma_device_type": 2 00:05:58.359 } 00:05:58.359 ], 00:05:58.359 "driver_specific": {} 00:05:58.359 }, 00:05:58.359 { 00:05:58.359 "name": "Passthru0", 00:05:58.359 "aliases": [ 00:05:58.359 "94875c01-78f9-5ef6-b7bb-5526f0132cb7" 00:05:58.359 ], 00:05:58.359 "product_name": "passthru", 00:05:58.359 "block_size": 512, 00:05:58.359 "num_blocks": 16384, 00:05:58.359 "uuid": "94875c01-78f9-5ef6-b7bb-5526f0132cb7", 00:05:58.359 "assigned_rate_limits": { 00:05:58.359 "rw_ios_per_sec": 0, 00:05:58.359 "rw_mbytes_per_sec": 0, 00:05:58.359 "r_mbytes_per_sec": 0, 00:05:58.359 "w_mbytes_per_sec": 0 00:05:58.359 }, 00:05:58.359 "claimed": false, 00:05:58.359 "zoned": false, 00:05:58.359 "supported_io_types": { 00:05:58.359 "read": true, 00:05:58.359 "write": true, 00:05:58.359 "unmap": true, 00:05:58.359 
"write_zeroes": true, 00:05:58.359 "flush": true, 00:05:58.359 "reset": true, 00:05:58.359 "compare": false, 00:05:58.359 "compare_and_write": false, 00:05:58.359 "abort": true, 00:05:58.359 "nvme_admin": false, 00:05:58.359 "nvme_io": false 00:05:58.359 }, 00:05:58.359 "memory_domains": [ 00:05:58.359 { 00:05:58.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.359 "dma_device_type": 2 00:05:58.359 } 00:05:58.359 ], 00:05:58.359 "driver_specific": { 00:05:58.359 "passthru": { 00:05:58.359 "name": "Passthru0", 00:05:58.359 "base_bdev_name": "Malloc0" 00:05:58.359 } 00:05:58.359 } 00:05:58.359 } 00:05:58.359 ]' 00:05:58.359 07:15:20 -- rpc/rpc.sh@21 -- # jq length 00:05:58.359 07:15:20 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:58.359 07:15:20 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:58.359 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.359 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.359 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.360 07:15:20 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:58.360 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.360 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.360 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.360 07:15:20 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:58.360 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.360 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.360 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.360 07:15:20 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:58.360 07:15:20 -- rpc/rpc.sh@26 -- # jq length 00:05:58.620 ************************************ 00:05:58.620 END TEST rpc_integrity 00:05:58.620 ************************************ 00:05:58.620 07:15:20 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:58.620 00:05:58.620 real 0m0.327s 00:05:58.620 user 0m0.221s 00:05:58.620 sys 0m0.038s 00:05:58.620 07:15:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.620 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.620 07:15:20 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:58.620 07:15:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.620 07:15:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.620 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.620 ************************************ 00:05:58.620 START TEST rpc_plugins 00:05:58.620 ************************************ 00:05:58.620 07:15:20 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:58.620 07:15:20 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:58.620 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.620 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.620 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.620 07:15:20 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:58.620 07:15:20 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:58.620 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.620 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.620 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.620 07:15:20 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:58.620 { 00:05:58.620 "name": "Malloc1", 00:05:58.620 "aliases": [ 00:05:58.620 "8a5c25f9-d44c-4241-b3b5-74bc32af791c" 00:05:58.620 ], 00:05:58.620 "product_name": "Malloc disk", 00:05:58.620 
"block_size": 4096, 00:05:58.620 "num_blocks": 256, 00:05:58.620 "uuid": "8a5c25f9-d44c-4241-b3b5-74bc32af791c", 00:05:58.620 "assigned_rate_limits": { 00:05:58.620 "rw_ios_per_sec": 0, 00:05:58.620 "rw_mbytes_per_sec": 0, 00:05:58.620 "r_mbytes_per_sec": 0, 00:05:58.620 "w_mbytes_per_sec": 0 00:05:58.620 }, 00:05:58.620 "claimed": false, 00:05:58.620 "zoned": false, 00:05:58.620 "supported_io_types": { 00:05:58.620 "read": true, 00:05:58.620 "write": true, 00:05:58.620 "unmap": true, 00:05:58.620 "write_zeroes": true, 00:05:58.620 "flush": true, 00:05:58.620 "reset": true, 00:05:58.620 "compare": false, 00:05:58.620 "compare_and_write": false, 00:05:58.620 "abort": true, 00:05:58.620 "nvme_admin": false, 00:05:58.620 "nvme_io": false 00:05:58.620 }, 00:05:58.620 "memory_domains": [ 00:05:58.620 { 00:05:58.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.620 "dma_device_type": 2 00:05:58.620 } 00:05:58.620 ], 00:05:58.620 "driver_specific": {} 00:05:58.620 } 00:05:58.620 ]' 00:05:58.620 07:15:20 -- rpc/rpc.sh@32 -- # jq length 00:05:58.620 07:15:20 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:58.620 07:15:20 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:58.620 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.620 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.620 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.620 07:15:20 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:58.620 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.620 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.620 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.620 07:15:20 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:58.620 07:15:20 -- rpc/rpc.sh@36 -- # jq length 00:05:58.879 ************************************ 00:05:58.879 END TEST rpc_plugins 00:05:58.879 ************************************ 00:05:58.879 07:15:20 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:58.879 00:05:58.879 real 0m0.164s 00:05:58.879 user 0m0.105s 00:05:58.879 sys 0m0.024s 00:05:58.879 07:15:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:58.879 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.879 07:15:20 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:58.879 07:15:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.879 07:15:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.879 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.879 ************************************ 00:05:58.879 START TEST rpc_trace_cmd_test 00:05:58.879 ************************************ 00:05:58.879 07:15:20 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:58.879 07:15:20 -- rpc/rpc.sh@40 -- # local info 00:05:58.879 07:15:20 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:58.879 07:15:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.879 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:05:58.879 07:15:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.879 07:15:20 -- rpc/rpc.sh@42 -- # info='{ 00:05:58.879 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65878", 00:05:58.879 "tpoint_group_mask": "0x8", 00:05:58.879 "iscsi_conn": { 00:05:58.879 "mask": "0x2", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "scsi": { 00:05:58.879 "mask": "0x4", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "bdev": { 00:05:58.879 "mask": "0x8", 00:05:58.879 "tpoint_mask": 
"0xffffffffffffffff" 00:05:58.879 }, 00:05:58.879 "nvmf_rdma": { 00:05:58.879 "mask": "0x10", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "nvmf_tcp": { 00:05:58.879 "mask": "0x20", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "ftl": { 00:05:58.879 "mask": "0x40", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "blobfs": { 00:05:58.879 "mask": "0x80", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "dsa": { 00:05:58.879 "mask": "0x200", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "thread": { 00:05:58.879 "mask": "0x400", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "nvme_pcie": { 00:05:58.879 "mask": "0x800", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "iaa": { 00:05:58.879 "mask": "0x1000", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "nvme_tcp": { 00:05:58.879 "mask": "0x2000", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 }, 00:05:58.879 "bdev_nvme": { 00:05:58.879 "mask": "0x4000", 00:05:58.879 "tpoint_mask": "0x0" 00:05:58.879 } 00:05:58.879 }' 00:05:58.879 07:15:20 -- rpc/rpc.sh@43 -- # jq length 00:05:58.879 07:15:21 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:58.879 07:15:21 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:58.879 07:15:21 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:58.879 07:15:21 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:58.879 07:15:21 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:58.879 07:15:21 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:59.138 07:15:21 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:59.138 07:15:21 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:59.138 ************************************ 00:05:59.138 END TEST rpc_trace_cmd_test 00:05:59.138 ************************************ 00:05:59.138 07:15:21 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:59.138 00:05:59.138 real 0m0.282s 00:05:59.138 user 0m0.238s 00:05:59.138 sys 0m0.031s 00:05:59.138 07:15:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.138 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.138 07:15:21 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:59.138 07:15:21 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:59.138 07:15:21 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:59.138 07:15:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.138 07:15:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.138 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.138 ************************************ 00:05:59.138 START TEST rpc_daemon_integrity 00:05:59.138 ************************************ 00:05:59.138 07:15:21 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:59.138 07:15:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:59.138 07:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.138 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.138 07:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.138 07:15:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:59.138 07:15:21 -- rpc/rpc.sh@13 -- # jq length 00:05:59.138 07:15:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:59.138 07:15:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:59.138 07:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.138 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.138 07:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.138 07:15:21 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:59.138 07:15:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:59.138 07:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.139 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.139 07:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.139 07:15:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:59.139 { 00:05:59.139 "name": "Malloc2", 00:05:59.139 "aliases": [ 00:05:59.139 "cbf31d1d-3130-4f17-85bc-192e599310bd" 00:05:59.139 ], 00:05:59.139 "product_name": "Malloc disk", 00:05:59.139 "block_size": 512, 00:05:59.139 "num_blocks": 16384, 00:05:59.139 "uuid": "cbf31d1d-3130-4f17-85bc-192e599310bd", 00:05:59.139 "assigned_rate_limits": { 00:05:59.139 "rw_ios_per_sec": 0, 00:05:59.139 "rw_mbytes_per_sec": 0, 00:05:59.139 "r_mbytes_per_sec": 0, 00:05:59.139 "w_mbytes_per_sec": 0 00:05:59.139 }, 00:05:59.139 "claimed": false, 00:05:59.139 "zoned": false, 00:05:59.139 "supported_io_types": { 00:05:59.139 "read": true, 00:05:59.139 "write": true, 00:05:59.139 "unmap": true, 00:05:59.139 "write_zeroes": true, 00:05:59.139 "flush": true, 00:05:59.139 "reset": true, 00:05:59.139 "compare": false, 00:05:59.139 "compare_and_write": false, 00:05:59.139 "abort": true, 00:05:59.139 "nvme_admin": false, 00:05:59.139 "nvme_io": false 00:05:59.139 }, 00:05:59.139 "memory_domains": [ 00:05:59.139 { 00:05:59.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.139 "dma_device_type": 2 00:05:59.139 } 00:05:59.139 ], 00:05:59.139 "driver_specific": {} 00:05:59.139 } 00:05:59.139 ]' 00:05:59.139 07:15:21 -- rpc/rpc.sh@17 -- # jq length 00:05:59.398 07:15:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:59.398 07:15:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:59.398 07:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.398 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.398 [2024-11-28 07:15:21.427846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:59.398 [2024-11-28 07:15:21.427900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:59.398 [2024-11-28 07:15:21.427920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15e29d0 00:05:59.398 [2024-11-28 07:15:21.427929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:59.398 [2024-11-28 07:15:21.429430] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:59.398 [2024-11-28 07:15:21.429465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:59.398 Passthru0 00:05:59.398 07:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.398 07:15:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:59.398 07:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.398 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.398 07:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.398 07:15:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:59.398 { 00:05:59.398 "name": "Malloc2", 00:05:59.398 "aliases": [ 00:05:59.398 "cbf31d1d-3130-4f17-85bc-192e599310bd" 00:05:59.398 ], 00:05:59.398 "product_name": "Malloc disk", 00:05:59.398 "block_size": 512, 00:05:59.398 "num_blocks": 16384, 00:05:59.398 "uuid": "cbf31d1d-3130-4f17-85bc-192e599310bd", 00:05:59.398 "assigned_rate_limits": { 00:05:59.398 "rw_ios_per_sec": 0, 00:05:59.398 "rw_mbytes_per_sec": 0, 00:05:59.398 "r_mbytes_per_sec": 0, 00:05:59.398 
"w_mbytes_per_sec": 0 00:05:59.398 }, 00:05:59.398 "claimed": true, 00:05:59.398 "claim_type": "exclusive_write", 00:05:59.398 "zoned": false, 00:05:59.398 "supported_io_types": { 00:05:59.398 "read": true, 00:05:59.398 "write": true, 00:05:59.398 "unmap": true, 00:05:59.398 "write_zeroes": true, 00:05:59.398 "flush": true, 00:05:59.398 "reset": true, 00:05:59.398 "compare": false, 00:05:59.398 "compare_and_write": false, 00:05:59.398 "abort": true, 00:05:59.398 "nvme_admin": false, 00:05:59.398 "nvme_io": false 00:05:59.398 }, 00:05:59.398 "memory_domains": [ 00:05:59.398 { 00:05:59.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.398 "dma_device_type": 2 00:05:59.398 } 00:05:59.398 ], 00:05:59.398 "driver_specific": {} 00:05:59.398 }, 00:05:59.398 { 00:05:59.398 "name": "Passthru0", 00:05:59.398 "aliases": [ 00:05:59.398 "9d607503-69ee-59da-9fd1-411a8551af1c" 00:05:59.398 ], 00:05:59.398 "product_name": "passthru", 00:05:59.398 "block_size": 512, 00:05:59.398 "num_blocks": 16384, 00:05:59.398 "uuid": "9d607503-69ee-59da-9fd1-411a8551af1c", 00:05:59.398 "assigned_rate_limits": { 00:05:59.398 "rw_ios_per_sec": 0, 00:05:59.398 "rw_mbytes_per_sec": 0, 00:05:59.398 "r_mbytes_per_sec": 0, 00:05:59.398 "w_mbytes_per_sec": 0 00:05:59.398 }, 00:05:59.398 "claimed": false, 00:05:59.398 "zoned": false, 00:05:59.398 "supported_io_types": { 00:05:59.398 "read": true, 00:05:59.398 "write": true, 00:05:59.398 "unmap": true, 00:05:59.398 "write_zeroes": true, 00:05:59.398 "flush": true, 00:05:59.398 "reset": true, 00:05:59.398 "compare": false, 00:05:59.398 "compare_and_write": false, 00:05:59.398 "abort": true, 00:05:59.398 "nvme_admin": false, 00:05:59.398 "nvme_io": false 00:05:59.398 }, 00:05:59.398 "memory_domains": [ 00:05:59.398 { 00:05:59.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.398 "dma_device_type": 2 00:05:59.398 } 00:05:59.398 ], 00:05:59.398 "driver_specific": { 00:05:59.398 "passthru": { 00:05:59.398 "name": "Passthru0", 00:05:59.398 "base_bdev_name": "Malloc2" 00:05:59.398 } 00:05:59.398 } 00:05:59.398 } 00:05:59.398 ]' 00:05:59.398 07:15:21 -- rpc/rpc.sh@21 -- # jq length 00:05:59.398 07:15:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:59.398 07:15:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:59.398 07:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.398 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.398 07:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.398 07:15:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:59.399 07:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.399 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.399 07:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.399 07:15:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:59.399 07:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.399 07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.399 07:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.399 07:15:21 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:59.399 07:15:21 -- rpc/rpc.sh@26 -- # jq length 00:05:59.399 ************************************ 00:05:59.399 END TEST rpc_daemon_integrity 00:05:59.399 ************************************ 00:05:59.399 07:15:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:59.399 00:05:59.399 real 0m0.317s 00:05:59.399 user 0m0.217s 00:05:59.399 sys 0m0.034s 00:05:59.399 07:15:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.399 
07:15:21 -- common/autotest_common.sh@10 -- # set +x 00:05:59.399 07:15:21 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:59.399 07:15:21 -- rpc/rpc.sh@84 -- # killprocess 65878 00:05:59.399 07:15:21 -- common/autotest_common.sh@936 -- # '[' -z 65878 ']' 00:05:59.399 07:15:21 -- common/autotest_common.sh@940 -- # kill -0 65878 00:05:59.399 07:15:21 -- common/autotest_common.sh@941 -- # uname 00:05:59.399 07:15:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.399 07:15:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65878 00:05:59.658 killing process with pid 65878 00:05:59.658 07:15:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.658 07:15:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.658 07:15:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65878' 00:05:59.658 07:15:21 -- common/autotest_common.sh@955 -- # kill 65878 00:05:59.658 07:15:21 -- common/autotest_common.sh@960 -- # wait 65878 00:05:59.917 00:05:59.917 real 0m2.988s 00:05:59.917 user 0m3.873s 00:05:59.917 sys 0m0.714s 00:05:59.917 07:15:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.917 07:15:22 -- common/autotest_common.sh@10 -- # set +x 00:05:59.917 ************************************ 00:05:59.917 END TEST rpc 00:05:59.917 ************************************ 00:05:59.917 07:15:22 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:59.917 07:15:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.917 07:15:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.917 07:15:22 -- common/autotest_common.sh@10 -- # set +x 00:05:59.917 ************************************ 00:05:59.917 START TEST rpc_client 00:05:59.917 ************************************ 00:05:59.917 07:15:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:59.917 * Looking for test storage... 00:05:59.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:59.917 07:15:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:59.917 07:15:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:59.917 07:15:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:00.176 07:15:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:00.176 07:15:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:00.176 07:15:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:00.176 07:15:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:00.176 07:15:22 -- scripts/common.sh@335 -- # IFS=.-: 00:06:00.176 07:15:22 -- scripts/common.sh@335 -- # read -ra ver1 00:06:00.176 07:15:22 -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.176 07:15:22 -- scripts/common.sh@336 -- # read -ra ver2 00:06:00.176 07:15:22 -- scripts/common.sh@337 -- # local 'op=<' 00:06:00.176 07:15:22 -- scripts/common.sh@339 -- # ver1_l=2 00:06:00.176 07:15:22 -- scripts/common.sh@340 -- # ver2_l=1 00:06:00.176 07:15:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:00.176 07:15:22 -- scripts/common.sh@343 -- # case "$op" in 00:06:00.176 07:15:22 -- scripts/common.sh@344 -- # : 1 00:06:00.176 07:15:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:00.176 07:15:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.176 07:15:22 -- scripts/common.sh@364 -- # decimal 1 00:06:00.176 07:15:22 -- scripts/common.sh@352 -- # local d=1 00:06:00.176 07:15:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.176 07:15:22 -- scripts/common.sh@354 -- # echo 1 00:06:00.176 07:15:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:00.176 07:15:22 -- scripts/common.sh@365 -- # decimal 2 00:06:00.176 07:15:22 -- scripts/common.sh@352 -- # local d=2 00:06:00.176 07:15:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.176 07:15:22 -- scripts/common.sh@354 -- # echo 2 00:06:00.176 07:15:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:00.176 07:15:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:00.176 07:15:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:00.176 07:15:22 -- scripts/common.sh@367 -- # return 0 00:06:00.176 07:15:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.177 07:15:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:00.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.177 --rc genhtml_branch_coverage=1 00:06:00.177 --rc genhtml_function_coverage=1 00:06:00.177 --rc genhtml_legend=1 00:06:00.177 --rc geninfo_all_blocks=1 00:06:00.177 --rc geninfo_unexecuted_blocks=1 00:06:00.177 00:06:00.177 ' 00:06:00.177 07:15:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:00.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.177 --rc genhtml_branch_coverage=1 00:06:00.177 --rc genhtml_function_coverage=1 00:06:00.177 --rc genhtml_legend=1 00:06:00.177 --rc geninfo_all_blocks=1 00:06:00.177 --rc geninfo_unexecuted_blocks=1 00:06:00.177 00:06:00.177 ' 00:06:00.177 07:15:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:00.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.177 --rc genhtml_branch_coverage=1 00:06:00.177 --rc genhtml_function_coverage=1 00:06:00.177 --rc genhtml_legend=1 00:06:00.177 --rc geninfo_all_blocks=1 00:06:00.177 --rc geninfo_unexecuted_blocks=1 00:06:00.177 00:06:00.177 ' 00:06:00.177 07:15:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:00.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.177 --rc genhtml_branch_coverage=1 00:06:00.177 --rc genhtml_function_coverage=1 00:06:00.177 --rc genhtml_legend=1 00:06:00.177 --rc geninfo_all_blocks=1 00:06:00.177 --rc geninfo_unexecuted_blocks=1 00:06:00.177 00:06:00.177 ' 00:06:00.177 07:15:22 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:00.177 OK 00:06:00.177 07:15:22 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:00.177 00:06:00.177 real 0m0.208s 00:06:00.177 user 0m0.122s 00:06:00.177 sys 0m0.093s 00:06:00.177 ************************************ 00:06:00.177 END TEST rpc_client 00:06:00.177 ************************************ 00:06:00.177 07:15:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.177 07:15:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.177 07:15:22 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:00.177 07:15:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.177 07:15:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.177 07:15:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.177 ************************************ 00:06:00.177 START TEST 
json_config 00:06:00.177 ************************************ 00:06:00.177 07:15:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:00.177 07:15:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:00.177 07:15:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:00.177 07:15:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:00.436 07:15:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:00.436 07:15:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:00.436 07:15:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:00.436 07:15:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:00.436 07:15:22 -- scripts/common.sh@335 -- # IFS=.-: 00:06:00.436 07:15:22 -- scripts/common.sh@335 -- # read -ra ver1 00:06:00.436 07:15:22 -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.436 07:15:22 -- scripts/common.sh@336 -- # read -ra ver2 00:06:00.436 07:15:22 -- scripts/common.sh@337 -- # local 'op=<' 00:06:00.436 07:15:22 -- scripts/common.sh@339 -- # ver1_l=2 00:06:00.436 07:15:22 -- scripts/common.sh@340 -- # ver2_l=1 00:06:00.436 07:15:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:00.436 07:15:22 -- scripts/common.sh@343 -- # case "$op" in 00:06:00.436 07:15:22 -- scripts/common.sh@344 -- # : 1 00:06:00.436 07:15:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:00.436 07:15:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.436 07:15:22 -- scripts/common.sh@364 -- # decimal 1 00:06:00.436 07:15:22 -- scripts/common.sh@352 -- # local d=1 00:06:00.436 07:15:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.436 07:15:22 -- scripts/common.sh@354 -- # echo 1 00:06:00.436 07:15:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:00.436 07:15:22 -- scripts/common.sh@365 -- # decimal 2 00:06:00.436 07:15:22 -- scripts/common.sh@352 -- # local d=2 00:06:00.436 07:15:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.436 07:15:22 -- scripts/common.sh@354 -- # echo 2 00:06:00.436 07:15:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:00.436 07:15:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:00.436 07:15:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:00.436 07:15:22 -- scripts/common.sh@367 -- # return 0 00:06:00.436 07:15:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.436 07:15:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:00.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.436 --rc genhtml_branch_coverage=1 00:06:00.436 --rc genhtml_function_coverage=1 00:06:00.436 --rc genhtml_legend=1 00:06:00.436 --rc geninfo_all_blocks=1 00:06:00.436 --rc geninfo_unexecuted_blocks=1 00:06:00.436 00:06:00.436 ' 00:06:00.436 07:15:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:00.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.437 --rc genhtml_branch_coverage=1 00:06:00.437 --rc genhtml_function_coverage=1 00:06:00.437 --rc genhtml_legend=1 00:06:00.437 --rc geninfo_all_blocks=1 00:06:00.437 --rc geninfo_unexecuted_blocks=1 00:06:00.437 00:06:00.437 ' 00:06:00.437 07:15:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:00.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.437 --rc genhtml_branch_coverage=1 00:06:00.437 --rc genhtml_function_coverage=1 00:06:00.437 --rc genhtml_legend=1 00:06:00.437 --rc 
geninfo_all_blocks=1 00:06:00.437 --rc geninfo_unexecuted_blocks=1 00:06:00.437 00:06:00.437 ' 00:06:00.437 07:15:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:00.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.437 --rc genhtml_branch_coverage=1 00:06:00.437 --rc genhtml_function_coverage=1 00:06:00.437 --rc genhtml_legend=1 00:06:00.437 --rc geninfo_all_blocks=1 00:06:00.437 --rc geninfo_unexecuted_blocks=1 00:06:00.437 00:06:00.437 ' 00:06:00.437 07:15:22 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:00.437 07:15:22 -- nvmf/common.sh@7 -- # uname -s 00:06:00.437 07:15:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.437 07:15:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.437 07:15:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.437 07:15:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.437 07:15:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.437 07:15:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.437 07:15:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.437 07:15:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.437 07:15:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.437 07:15:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.437 07:15:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:06:00.437 07:15:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:06:00.437 07:15:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.437 07:15:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.437 07:15:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:00.437 07:15:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.437 07:15:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.437 07:15:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.437 07:15:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.437 07:15:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.437 07:15:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.437 07:15:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.437 
07:15:22 -- paths/export.sh@5 -- # export PATH 00:06:00.437 07:15:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.437 07:15:22 -- nvmf/common.sh@46 -- # : 0 00:06:00.437 07:15:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:00.437 07:15:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:00.437 07:15:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:00.437 07:15:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.437 07:15:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.437 07:15:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:00.437 07:15:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:00.437 07:15:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:00.437 07:15:22 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:00.437 07:15:22 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:00.437 07:15:22 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:00.437 07:15:22 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:00.437 07:15:22 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:00.437 07:15:22 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:00.437 07:15:22 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:00.437 07:15:22 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:00.437 07:15:22 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:00.437 07:15:22 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:00.437 07:15:22 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:00.437 07:15:22 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:00.437 07:15:22 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:00.437 07:15:22 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:00.437 07:15:22 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:00.437 INFO: JSON configuration test init 00:06:00.437 07:15:22 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:00.437 07:15:22 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:00.437 07:15:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.437 07:15:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.437 07:15:22 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:00.437 07:15:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.437 07:15:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.437 Waiting for target to run... 00:06:00.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:00.437 07:15:22 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:00.437 07:15:22 -- json_config/json_config.sh@98 -- # local app=target 00:06:00.437 07:15:22 -- json_config/json_config.sh@99 -- # shift 00:06:00.437 07:15:22 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:00.437 07:15:22 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:00.437 07:15:22 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:00.437 07:15:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:00.437 07:15:22 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:00.437 07:15:22 -- json_config/json_config.sh@111 -- # app_pid[$app]=66131 00:06:00.437 07:15:22 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:00.437 07:15:22 -- json_config/json_config.sh@114 -- # waitforlisten 66131 /var/tmp/spdk_tgt.sock 00:06:00.437 07:15:22 -- common/autotest_common.sh@829 -- # '[' -z 66131 ']' 00:06:00.437 07:15:22 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:00.437 07:15:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:00.437 07:15:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.437 07:15:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:00.437 07:15:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.437 07:15:22 -- common/autotest_common.sh@10 -- # set +x 00:06:00.438 [2024-11-28 07:15:22.632582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.438 [2024-11-28 07:15:22.632962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66131 ] 00:06:01.006 [2024-11-28 07:15:23.075706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.006 [2024-11-28 07:15:23.149135] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:01.006 [2024-11-28 07:15:23.149649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.574 07:15:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.574 07:15:23 -- common/autotest_common.sh@862 -- # return 0 00:06:01.574 07:15:23 -- json_config/json_config.sh@115 -- # echo '' 00:06:01.574 00:06:01.574 07:15:23 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:01.574 07:15:23 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:01.574 07:15:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.574 07:15:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.574 07:15:23 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:01.574 07:15:23 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:01.574 07:15:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.574 07:15:23 -- common/autotest_common.sh@10 -- # set +x 00:06:01.574 07:15:23 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:01.574 07:15:23 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:01.574 07:15:23 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:02.142 07:15:24 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:02.142 07:15:24 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:02.142 07:15:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.142 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:02.142 07:15:24 -- json_config/json_config.sh@48 -- # local ret=0 00:06:02.142 07:15:24 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:02.142 07:15:24 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:02.142 07:15:24 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:02.142 07:15:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:02.142 07:15:24 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:02.400 07:15:24 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:02.401 07:15:24 -- json_config/json_config.sh@51 -- # local get_types 00:06:02.401 07:15:24 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:02.401 07:15:24 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:02.401 07:15:24 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:02.401 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:02.401 07:15:24 -- json_config/json_config.sh@58 -- # return 0 00:06:02.401 07:15:24 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:02.401 07:15:24 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:02.401 07:15:24 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:02.401 07:15:24 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:02.401 07:15:24 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:02.401 07:15:24 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:02.401 07:15:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.401 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:02.401 07:15:24 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:02.401 07:15:24 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:06:02.401 07:15:24 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:06:02.401 07:15:24 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:02.401 07:15:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:02.660 MallocForNvmf0 00:06:02.660 07:15:24 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:02.660 07:15:24 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:02.919 MallocForNvmf1 00:06:02.919 07:15:25 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:02.919 07:15:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:03.488 [2024-11-28 07:15:25.483791] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:03.488 07:15:25 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.488 07:15:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:03.747 07:15:25 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:03.747 07:15:25 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.027 07:15:26 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.027 07:15:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.284 07:15:26 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:04.284 07:15:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:04.542 [2024-11-28 07:15:26.740879] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:04.542 07:15:26 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:04.542 07:15:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.542 07:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:04.542 07:15:26 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:04.542 07:15:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.542 07:15:26 -- common/autotest_common.sh@10 -- # set +x 00:06:04.800 07:15:26 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:04.800 07:15:26 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.800 07:15:26 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.102 MallocBdevForConfigChangeCheck 00:06:05.102 07:15:27 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:05.102 07:15:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.102 07:15:27 -- common/autotest_common.sh@10 -- # set +x 00:06:05.102 07:15:27 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:05.102 07:15:27 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.360 INFO: shutting down applications... 00:06:05.360 07:15:27 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:06:05.360 07:15:27 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:05.360 07:15:27 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:05.360 07:15:27 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:05.360 07:15:27 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:05.926 Calling clear_iscsi_subsystem 00:06:05.926 Calling clear_nvmf_subsystem 00:06:05.926 Calling clear_nbd_subsystem 00:06:05.926 Calling clear_ublk_subsystem 00:06:05.926 Calling clear_vhost_blk_subsystem 00:06:05.926 Calling clear_vhost_scsi_subsystem 00:06:05.926 Calling clear_scheduler_subsystem 00:06:05.926 Calling clear_bdev_subsystem 00:06:05.926 Calling clear_accel_subsystem 00:06:05.926 Calling clear_vmd_subsystem 00:06:05.926 Calling clear_sock_subsystem 00:06:05.926 Calling clear_iobuf_subsystem 00:06:05.926 07:15:27 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:05.926 07:15:27 -- json_config/json_config.sh@396 -- # count=100 00:06:05.926 07:15:27 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:05.926 07:15:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.926 07:15:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:05.926 07:15:27 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:06.185 07:15:28 -- json_config/json_config.sh@398 -- # break 00:06:06.185 07:15:28 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:06.185 07:15:28 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:06.185 07:15:28 -- json_config/json_config.sh@120 -- # local app=target 00:06:06.185 07:15:28 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:06.185 07:15:28 -- json_config/json_config.sh@124 -- # [[ -n 66131 ]] 00:06:06.185 07:15:28 -- json_config/json_config.sh@127 -- # kill -SIGINT 66131 00:06:06.185 07:15:28 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:06.185 07:15:28 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:06.185 07:15:28 -- json_config/json_config.sh@130 -- # kill -0 66131 00:06:06.185 07:15:28 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:06.752 07:15:28 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:06.752 SPDK target shutdown done 00:06:06.752 INFO: relaunching applications... 00:06:06.752 07:15:28 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:06.752 07:15:28 -- json_config/json_config.sh@130 -- # kill -0 66131 00:06:06.752 07:15:28 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:06.752 07:15:28 -- json_config/json_config.sh@132 -- # break 00:06:06.752 07:15:28 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:06.752 07:15:28 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:06.752 07:15:28 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:06:06.752 07:15:28 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:06.752 07:15:28 -- json_config/json_config.sh@98 -- # local app=target 00:06:06.752 07:15:28 -- json_config/json_config.sh@99 -- # shift 00:06:06.752 07:15:28 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:06.752 07:15:28 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:06.752 07:15:28 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:06.752 07:15:28 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:06.752 07:15:28 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:06.752 07:15:28 -- json_config/json_config.sh@111 -- # app_pid[$app]=66327 00:06:06.752 07:15:28 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:06.752 07:15:28 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:06.752 Waiting for target to run... 00:06:06.752 07:15:28 -- json_config/json_config.sh@114 -- # waitforlisten 66327 /var/tmp/spdk_tgt.sock 00:06:06.752 07:15:28 -- common/autotest_common.sh@829 -- # '[' -z 66327 ']' 00:06:06.752 07:15:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.752 07:15:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.752 07:15:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:06.752 07:15:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.752 07:15:28 -- common/autotest_common.sh@10 -- # set +x 00:06:06.752 [2024-11-28 07:15:28.941894] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.752 [2024-11-28 07:15:28.942463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66327 ] 00:06:07.319 [2024-11-28 07:15:29.377917] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.319 [2024-11-28 07:15:29.474240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.319 [2024-11-28 07:15:29.474456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.579 [2024-11-28 07:15:29.802319] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:07.579 [2024-11-28 07:15:29.834426] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:07.838 00:06:07.838 INFO: Checking if target configuration is the same... 00:06:07.838 07:15:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.838 07:15:29 -- common/autotest_common.sh@862 -- # return 0 00:06:07.838 07:15:29 -- json_config/json_config.sh@115 -- # echo '' 00:06:07.838 07:15:29 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:07.838 07:15:29 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
00:06:07.838 07:15:29 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:07.838 07:15:29 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:07.838 07:15:29 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.838 + '[' 2 -ne 2 ']' 00:06:07.838 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:07.838 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:07.838 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:07.838 +++ basename /dev/fd/62 00:06:07.838 ++ mktemp /tmp/62.XXX 00:06:07.838 + tmp_file_1=/tmp/62.zpm 00:06:07.838 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:07.838 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.838 + tmp_file_2=/tmp/spdk_tgt_config.json.x66 00:06:07.838 + ret=0 00:06:07.838 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:08.097 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:08.097 + diff -u /tmp/62.zpm /tmp/spdk_tgt_config.json.x66 00:06:08.097 INFO: JSON config files are the same 00:06:08.097 + echo 'INFO: JSON config files are the same' 00:06:08.097 + rm /tmp/62.zpm /tmp/spdk_tgt_config.json.x66 00:06:08.097 + exit 0 00:06:08.097 INFO: changing configuration and checking if this can be detected... 00:06:08.097 07:15:30 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:08.097 07:15:30 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:08.097 07:15:30 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:08.097 07:15:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:08.356 07:15:30 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:08.356 07:15:30 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.356 07:15:30 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:08.356 + '[' 2 -ne 2 ']' 00:06:08.356 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:08.356 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:08.356 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:08.356 +++ basename /dev/fd/62 00:06:08.356 ++ mktemp /tmp/62.XXX 00:06:08.356 + tmp_file_1=/tmp/62.Jpw 00:06:08.356 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:08.356 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:08.356 + tmp_file_2=/tmp/spdk_tgt_config.json.qj5 00:06:08.356 + ret=0 00:06:08.356 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:08.924 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:08.924 + diff -u /tmp/62.Jpw /tmp/spdk_tgt_config.json.qj5 00:06:08.924 + ret=1 00:06:08.924 + echo '=== Start of file: /tmp/62.Jpw ===' 00:06:08.924 + cat /tmp/62.Jpw 00:06:08.924 + echo '=== End of file: /tmp/62.Jpw ===' 00:06:08.924 + echo '' 00:06:08.924 + echo '=== Start of file: /tmp/spdk_tgt_config.json.qj5 ===' 00:06:08.924 + cat /tmp/spdk_tgt_config.json.qj5 00:06:08.924 + echo '=== End of file: /tmp/spdk_tgt_config.json.qj5 ===' 00:06:08.924 + echo '' 00:06:08.924 + rm /tmp/62.Jpw /tmp/spdk_tgt_config.json.qj5 00:06:08.924 + exit 1 00:06:08.924 INFO: configuration change detected. 00:06:08.924 07:15:31 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:08.924 07:15:31 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:08.924 07:15:31 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:08.924 07:15:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.924 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:08.924 07:15:31 -- json_config/json_config.sh@360 -- # local ret=0 00:06:08.924 07:15:31 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:08.924 07:15:31 -- json_config/json_config.sh@370 -- # [[ -n 66327 ]] 00:06:08.924 07:15:31 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:08.924 07:15:31 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:08.924 07:15:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.924 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:08.924 07:15:31 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:08.924 07:15:31 -- json_config/json_config.sh@246 -- # uname -s 00:06:08.924 07:15:31 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:08.924 07:15:31 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:08.924 07:15:31 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:08.924 07:15:31 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:08.924 07:15:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.924 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:08.924 07:15:31 -- json_config/json_config.sh@376 -- # killprocess 66327 00:06:08.924 07:15:31 -- common/autotest_common.sh@936 -- # '[' -z 66327 ']' 00:06:08.924 07:15:31 -- common/autotest_common.sh@940 -- # kill -0 66327 00:06:08.924 07:15:31 -- common/autotest_common.sh@941 -- # uname 00:06:08.924 07:15:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.924 07:15:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66327 00:06:09.184 killing process with pid 66327 00:06:09.184 07:15:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.184 07:15:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.184 07:15:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66327' 00:06:09.184 
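The killprocess trace that starts just above and finishes below does not kill blindly: it first checks the pid is still alive with kill -0, reads the command name with ps --no-headers -o comm=, and only then signals and waits. A stripped-down sketch of that guard; the reactor_* name check and the error handling are simplifications, not the autotest_common.sh implementation:

  # Sketch: kill an SPDK target pid only after sanity-checking it.
  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || { echo "pid $pid is already gone"; return 0; }
      local name
      name=$(ps --no-headers -o comm= "$pid")
      if [[ $name == reactor_* ]]; then
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid" 2>/dev/null || true   # wait only succeeds if $pid is a child of this shell
      else
          echo "pid $pid is '$name', refusing to kill it" >&2
          return 1
      fi
  }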
07:15:31 -- common/autotest_common.sh@955 -- # kill 66327 00:06:09.184 07:15:31 -- common/autotest_common.sh@960 -- # wait 66327 00:06:09.184 07:15:31 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:09.184 07:15:31 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:09.184 07:15:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.184 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.444 07:15:31 -- json_config/json_config.sh@381 -- # return 0 00:06:09.444 07:15:31 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:09.444 INFO: Success 00:06:09.444 ************************************ 00:06:09.444 END TEST json_config 00:06:09.444 ************************************ 00:06:09.444 00:06:09.444 real 0m9.100s 00:06:09.444 user 0m13.369s 00:06:09.444 sys 0m1.900s 00:06:09.444 07:15:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:09.444 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.444 07:15:31 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:09.444 07:15:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.444 07:15:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.444 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.444 ************************************ 00:06:09.444 START TEST json_config_extra_key 00:06:09.444 ************************************ 00:06:09.444 07:15:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:09.444 07:15:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.444 07:15:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:09.444 07:15:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.444 07:15:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:09.444 07:15:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:09.444 07:15:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:09.444 07:15:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:09.444 07:15:31 -- scripts/common.sh@335 -- # IFS=.-: 00:06:09.444 07:15:31 -- scripts/common.sh@335 -- # read -ra ver1 00:06:09.444 07:15:31 -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.445 07:15:31 -- scripts/common.sh@336 -- # read -ra ver2 00:06:09.445 07:15:31 -- scripts/common.sh@337 -- # local 'op=<' 00:06:09.445 07:15:31 -- scripts/common.sh@339 -- # ver1_l=2 00:06:09.445 07:15:31 -- scripts/common.sh@340 -- # ver2_l=1 00:06:09.445 07:15:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:09.445 07:15:31 -- scripts/common.sh@343 -- # case "$op" in 00:06:09.445 07:15:31 -- scripts/common.sh@344 -- # : 1 00:06:09.445 07:15:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:09.445 07:15:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.445 07:15:31 -- scripts/common.sh@364 -- # decimal 1 00:06:09.445 07:15:31 -- scripts/common.sh@352 -- # local d=1 00:06:09.445 07:15:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.445 07:15:31 -- scripts/common.sh@354 -- # echo 1 00:06:09.445 07:15:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:09.445 07:15:31 -- scripts/common.sh@365 -- # decimal 2 00:06:09.445 07:15:31 -- scripts/common.sh@352 -- # local d=2 00:06:09.445 07:15:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.445 07:15:31 -- scripts/common.sh@354 -- # echo 2 00:06:09.445 07:15:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.445 07:15:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.445 07:15:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.445 07:15:31 -- scripts/common.sh@367 -- # return 0 00:06:09.445 07:15:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.445 07:15:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.445 --rc genhtml_branch_coverage=1 00:06:09.445 --rc genhtml_function_coverage=1 00:06:09.445 --rc genhtml_legend=1 00:06:09.445 --rc geninfo_all_blocks=1 00:06:09.445 --rc geninfo_unexecuted_blocks=1 00:06:09.445 00:06:09.445 ' 00:06:09.445 07:15:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.445 --rc genhtml_branch_coverage=1 00:06:09.445 --rc genhtml_function_coverage=1 00:06:09.445 --rc genhtml_legend=1 00:06:09.445 --rc geninfo_all_blocks=1 00:06:09.445 --rc geninfo_unexecuted_blocks=1 00:06:09.445 00:06:09.445 ' 00:06:09.445 07:15:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.445 --rc genhtml_branch_coverage=1 00:06:09.445 --rc genhtml_function_coverage=1 00:06:09.445 --rc genhtml_legend=1 00:06:09.445 --rc geninfo_all_blocks=1 00:06:09.445 --rc geninfo_unexecuted_blocks=1 00:06:09.445 00:06:09.445 ' 00:06:09.445 07:15:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.445 --rc genhtml_branch_coverage=1 00:06:09.445 --rc genhtml_function_coverage=1 00:06:09.445 --rc genhtml_legend=1 00:06:09.445 --rc geninfo_all_blocks=1 00:06:09.445 --rc geninfo_unexecuted_blocks=1 00:06:09.445 00:06:09.445 ' 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:09.445 07:15:31 -- nvmf/common.sh@7 -- # uname -s 00:06:09.445 07:15:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.445 07:15:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.445 07:15:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.445 07:15:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.445 07:15:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.445 07:15:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.445 07:15:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.445 07:15:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.445 07:15:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.445 07:15:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.445 07:15:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:06:09.445 07:15:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:06:09.445 07:15:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.445 07:15:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.445 07:15:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:09.445 07:15:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:09.445 07:15:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.445 07:15:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.445 07:15:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.445 07:15:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.445 07:15:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.445 07:15:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.445 07:15:31 -- paths/export.sh@5 -- # export PATH 00:06:09.445 07:15:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.445 07:15:31 -- nvmf/common.sh@46 -- # : 0 00:06:09.445 07:15:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:09.445 07:15:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:09.445 07:15:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:09.445 07:15:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.445 07:15:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.445 07:15:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:09.445 07:15:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:09.445 07:15:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:09.445 INFO: launching applications... 00:06:09.445 Waiting for target to run... 00:06:09.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66480 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66480 /var/tmp/spdk_tgt.sock 00:06:09.445 07:15:31 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:09.445 07:15:31 -- common/autotest_common.sh@829 -- # '[' -z 66480 ']' 00:06:09.445 07:15:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:09.445 07:15:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.445 07:15:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:09.445 07:15:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.445 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:09.705 [2024-11-28 07:15:31.760399] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
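One detail worth pulling out of the extra_key preamble above: the test arms trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR before launching anything, so any failing command reports which function and line it died in. A self-contained sketch of that trap pattern; the handler body is illustrative and is not the test's on_error_exit:

  #!/usr/bin/env bash
  set -E    # make the ERR trap fire inside functions as well

  on_error_sketch() {
      # $1 = function that was executing, $2 = line number of the failing command
      echo "ERROR: command failed in ${1:-main} at line $2" >&2
      exit 1
  }
  trap 'on_error_sketch "${FUNCNAME[0]}" "${LINENO}"' ERR

  start_app_sketch() {
      false    # reports: command failed in start_app_sketch at line ...
  }
  start_app_sketch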
00:06:09.705 [2024-11-28 07:15:31.760688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66480 ] 00:06:09.964 [2024-11-28 07:15:32.190075] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.223 [2024-11-28 07:15:32.267211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:10.223 [2024-11-28 07:15:32.267726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.790 07:15:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.790 07:15:32 -- common/autotest_common.sh@862 -- # return 0 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:10.790 00:06:10.790 INFO: shutting down applications... 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66480 ]] 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66480 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66480 00:06:10.790 07:15:32 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:11.048 07:15:33 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:11.048 07:15:33 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:11.048 07:15:33 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66480 00:06:11.048 07:15:33 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:11.048 07:15:33 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:11.048 07:15:33 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:11.048 07:15:33 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:11.048 SPDK target shutdown done 00:06:11.048 Success 00:06:11.048 07:15:33 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:11.048 00:06:11.048 real 0m1.763s 00:06:11.048 user 0m1.653s 00:06:11.048 sys 0m0.453s 00:06:11.048 07:15:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.048 ************************************ 00:06:11.048 END TEST json_config_extra_key 00:06:11.048 ************************************ 00:06:11.048 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:11.048 07:15:33 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.048 07:15:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.048 07:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.048 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:11.048 ************************************ 00:06:11.048 START TEST alias_rpc 00:06:11.048 ************************************ 00:06:11.307 07:15:33 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.307 * Looking for test storage... 00:06:11.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:11.307 07:15:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:11.307 07:15:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:11.307 07:15:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:11.307 07:15:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:11.307 07:15:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:11.307 07:15:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:11.307 07:15:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:11.307 07:15:33 -- scripts/common.sh@335 -- # IFS=.-: 00:06:11.307 07:15:33 -- scripts/common.sh@335 -- # read -ra ver1 00:06:11.307 07:15:33 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.307 07:15:33 -- scripts/common.sh@336 -- # read -ra ver2 00:06:11.307 07:15:33 -- scripts/common.sh@337 -- # local 'op=<' 00:06:11.307 07:15:33 -- scripts/common.sh@339 -- # ver1_l=2 00:06:11.307 07:15:33 -- scripts/common.sh@340 -- # ver2_l=1 00:06:11.307 07:15:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:11.307 07:15:33 -- scripts/common.sh@343 -- # case "$op" in 00:06:11.307 07:15:33 -- scripts/common.sh@344 -- # : 1 00:06:11.307 07:15:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:11.307 07:15:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.307 07:15:33 -- scripts/common.sh@364 -- # decimal 1 00:06:11.307 07:15:33 -- scripts/common.sh@352 -- # local d=1 00:06:11.307 07:15:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.307 07:15:33 -- scripts/common.sh@354 -- # echo 1 00:06:11.307 07:15:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:11.307 07:15:33 -- scripts/common.sh@365 -- # decimal 2 00:06:11.307 07:15:33 -- scripts/common.sh@352 -- # local d=2 00:06:11.307 07:15:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.307 07:15:33 -- scripts/common.sh@354 -- # echo 2 00:06:11.307 07:15:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:11.307 07:15:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:11.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.307 07:15:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:11.307 07:15:33 -- scripts/common.sh@367 -- # return 0 00:06:11.307 07:15:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.307 07:15:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.307 --rc genhtml_branch_coverage=1 00:06:11.307 --rc genhtml_function_coverage=1 00:06:11.307 --rc genhtml_legend=1 00:06:11.307 --rc geninfo_all_blocks=1 00:06:11.307 --rc geninfo_unexecuted_blocks=1 00:06:11.307 00:06:11.307 ' 00:06:11.307 07:15:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.307 --rc genhtml_branch_coverage=1 00:06:11.307 --rc genhtml_function_coverage=1 00:06:11.307 --rc genhtml_legend=1 00:06:11.307 --rc geninfo_all_blocks=1 00:06:11.307 --rc geninfo_unexecuted_blocks=1 00:06:11.307 00:06:11.307 ' 00:06:11.307 07:15:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.307 --rc genhtml_branch_coverage=1 00:06:11.307 --rc genhtml_function_coverage=1 00:06:11.307 --rc genhtml_legend=1 00:06:11.307 --rc geninfo_all_blocks=1 00:06:11.307 --rc geninfo_unexecuted_blocks=1 00:06:11.307 00:06:11.307 ' 00:06:11.307 07:15:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:11.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.307 --rc genhtml_branch_coverage=1 00:06:11.307 --rc genhtml_function_coverage=1 00:06:11.307 --rc genhtml_legend=1 00:06:11.307 --rc geninfo_all_blocks=1 00:06:11.307 --rc geninfo_unexecuted_blocks=1 00:06:11.307 00:06:11.307 ' 00:06:11.307 07:15:33 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.307 07:15:33 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66557 00:06:11.307 07:15:33 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:11.307 07:15:33 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66557 00:06:11.307 07:15:33 -- common/autotest_common.sh@829 -- # '[' -z 66557 ']' 00:06:11.307 07:15:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.307 07:15:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.307 07:15:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.307 07:15:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.307 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:11.307 [2024-11-28 07:15:33.536708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
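The scripts/common.sh chatter repeated in each test preamble (here and above) is a field-wise version comparison used on the lcov version check (the literal call traced is lt 1.15 2): both version strings are split on '.', '-' and ':' and compared field by field, with the first differing field deciding. A compact sketch of that comparison, assuming purely numeric fields:

  # Sketch: "is version A older than version B" in bash (numeric fields only).
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal is not "less than"
  }

  version_lt 1.15 2 && echo "1.15 is older than 2"
  version_lt 2.39.2 2.39 || echo "2.39.2 is not older than 2.39"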
00:06:11.307 [2024-11-28 07:15:33.537022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66557 ] 00:06:11.566 [2024-11-28 07:15:33.671922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.566 [2024-11-28 07:15:33.760768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:11.566 [2024-11-28 07:15:33.761247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.538 07:15:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.538 07:15:34 -- common/autotest_common.sh@862 -- # return 0 00:06:12.538 07:15:34 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:12.797 07:15:34 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66557 00:06:12.797 07:15:34 -- common/autotest_common.sh@936 -- # '[' -z 66557 ']' 00:06:12.797 07:15:34 -- common/autotest_common.sh@940 -- # kill -0 66557 00:06:12.797 07:15:34 -- common/autotest_common.sh@941 -- # uname 00:06:12.797 07:15:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.797 07:15:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66557 00:06:12.797 killing process with pid 66557 00:06:12.797 07:15:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.797 07:15:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.797 07:15:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66557' 00:06:12.797 07:15:34 -- common/autotest_common.sh@955 -- # kill 66557 00:06:12.797 07:15:34 -- common/autotest_common.sh@960 -- # wait 66557 00:06:13.063 ************************************ 00:06:13.063 END TEST alias_rpc 00:06:13.063 ************************************ 00:06:13.063 00:06:13.063 real 0m1.947s 00:06:13.063 user 0m2.238s 00:06:13.063 sys 0m0.444s 00:06:13.063 07:15:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.063 07:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:13.063 07:15:35 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:13.063 07:15:35 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.063 07:15:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:13.063 07:15:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.063 07:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:13.063 ************************************ 00:06:13.063 START TEST spdkcli_tcp 00:06:13.063 ************************************ 00:06:13.063 07:15:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.323 * Looking for test storage... 
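The alias_rpc run traced above amounts to: start spdk_tgt, replay a configuration through scripts/rpc.py load_config -i, and tear the target down again, presumably exercising the RPC method aliases the test is named after. A minimal round-trip sketch; the socket, file path and the idea of replaying into a freshly (re)started target are assumptions, and -i simply mirrors the flag seen in the trace:

  # Sketch: capture a target's configuration, then replay it into a (re)started target.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}      # assumed checkout location
  RPC_SOCK=/var/tmp/spdk.sock
  CFG=/tmp/alias_config.json                              # placeholder path

  # 1) On a configured target: dump everything needed to recreate its state.
  "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" save_config > "$CFG"

  # 2) On a freshly started target: replay that file over RPC.
  "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" load_config -i < "$CFG"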
00:06:13.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:13.323 07:15:35 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:13.323 07:15:35 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:13.323 07:15:35 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:13.323 07:15:35 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:13.323 07:15:35 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:13.323 07:15:35 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:13.323 07:15:35 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:13.323 07:15:35 -- scripts/common.sh@335 -- # IFS=.-: 00:06:13.323 07:15:35 -- scripts/common.sh@335 -- # read -ra ver1 00:06:13.323 07:15:35 -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.323 07:15:35 -- scripts/common.sh@336 -- # read -ra ver2 00:06:13.323 07:15:35 -- scripts/common.sh@337 -- # local 'op=<' 00:06:13.323 07:15:35 -- scripts/common.sh@339 -- # ver1_l=2 00:06:13.323 07:15:35 -- scripts/common.sh@340 -- # ver2_l=1 00:06:13.323 07:15:35 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:13.323 07:15:35 -- scripts/common.sh@343 -- # case "$op" in 00:06:13.323 07:15:35 -- scripts/common.sh@344 -- # : 1 00:06:13.323 07:15:35 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:13.323 07:15:35 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.323 07:15:35 -- scripts/common.sh@364 -- # decimal 1 00:06:13.323 07:15:35 -- scripts/common.sh@352 -- # local d=1 00:06:13.323 07:15:35 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.323 07:15:35 -- scripts/common.sh@354 -- # echo 1 00:06:13.323 07:15:35 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:13.323 07:15:35 -- scripts/common.sh@365 -- # decimal 2 00:06:13.323 07:15:35 -- scripts/common.sh@352 -- # local d=2 00:06:13.323 07:15:35 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.323 07:15:35 -- scripts/common.sh@354 -- # echo 2 00:06:13.323 07:15:35 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:13.323 07:15:35 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:13.323 07:15:35 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:13.323 07:15:35 -- scripts/common.sh@367 -- # return 0 00:06:13.323 07:15:35 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.323 07:15:35 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:13.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.323 --rc genhtml_branch_coverage=1 00:06:13.323 --rc genhtml_function_coverage=1 00:06:13.323 --rc genhtml_legend=1 00:06:13.323 --rc geninfo_all_blocks=1 00:06:13.323 --rc geninfo_unexecuted_blocks=1 00:06:13.323 00:06:13.323 ' 00:06:13.323 07:15:35 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:13.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.323 --rc genhtml_branch_coverage=1 00:06:13.323 --rc genhtml_function_coverage=1 00:06:13.323 --rc genhtml_legend=1 00:06:13.323 --rc geninfo_all_blocks=1 00:06:13.323 --rc geninfo_unexecuted_blocks=1 00:06:13.323 00:06:13.323 ' 00:06:13.323 07:15:35 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:13.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.323 --rc genhtml_branch_coverage=1 00:06:13.323 --rc genhtml_function_coverage=1 00:06:13.323 --rc genhtml_legend=1 00:06:13.323 --rc geninfo_all_blocks=1 00:06:13.323 --rc geninfo_unexecuted_blocks=1 00:06:13.323 00:06:13.323 ' 00:06:13.323 07:15:35 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:13.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.323 --rc genhtml_branch_coverage=1 00:06:13.323 --rc genhtml_function_coverage=1 00:06:13.323 --rc genhtml_legend=1 00:06:13.323 --rc geninfo_all_blocks=1 00:06:13.323 --rc geninfo_unexecuted_blocks=1 00:06:13.323 00:06:13.323 ' 00:06:13.323 07:15:35 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:13.323 07:15:35 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:13.323 07:15:35 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:13.323 07:15:35 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:13.323 07:15:35 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:13.323 07:15:35 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:13.323 07:15:35 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:13.323 07:15:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.323 07:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:13.323 07:15:35 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66640 00:06:13.323 07:15:35 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:13.323 07:15:35 -- spdkcli/tcp.sh@27 -- # waitforlisten 66640 00:06:13.323 07:15:35 -- common/autotest_common.sh@829 -- # '[' -z 66640 ']' 00:06:13.323 07:15:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.323 07:15:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.323 07:15:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.323 07:15:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.323 07:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:13.323 [2024-11-28 07:15:35.578146] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:13.323 [2024-11-28 07:15:35.578458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66640 ] 00:06:13.582 [2024-11-28 07:15:35.711440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.582 [2024-11-28 07:15:35.798585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:13.582 [2024-11-28 07:15:35.799092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.582 [2024-11-28 07:15:35.799105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.520 07:15:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.520 07:15:36 -- common/autotest_common.sh@862 -- # return 0 00:06:14.520 07:15:36 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:14.520 07:15:36 -- spdkcli/tcp.sh@31 -- # socat_pid=66657 00:06:14.520 07:15:36 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:14.780 [ 00:06:14.780 "bdev_malloc_delete", 00:06:14.780 "bdev_malloc_create", 00:06:14.780 "bdev_null_resize", 00:06:14.780 "bdev_null_delete", 00:06:14.780 "bdev_null_create", 00:06:14.780 "bdev_nvme_cuse_unregister", 00:06:14.780 "bdev_nvme_cuse_register", 00:06:14.780 "bdev_opal_new_user", 00:06:14.780 "bdev_opal_set_lock_state", 00:06:14.780 "bdev_opal_delete", 00:06:14.780 "bdev_opal_get_info", 00:06:14.780 "bdev_opal_create", 00:06:14.780 "bdev_nvme_opal_revert", 00:06:14.780 "bdev_nvme_opal_init", 00:06:14.780 "bdev_nvme_send_cmd", 00:06:14.780 "bdev_nvme_get_path_iostat", 00:06:14.780 "bdev_nvme_get_mdns_discovery_info", 00:06:14.780 "bdev_nvme_stop_mdns_discovery", 00:06:14.780 "bdev_nvme_start_mdns_discovery", 00:06:14.780 "bdev_nvme_set_multipath_policy", 00:06:14.780 "bdev_nvme_set_preferred_path", 00:06:14.780 "bdev_nvme_get_io_paths", 00:06:14.780 "bdev_nvme_remove_error_injection", 00:06:14.780 "bdev_nvme_add_error_injection", 00:06:14.780 "bdev_nvme_get_discovery_info", 00:06:14.780 "bdev_nvme_stop_discovery", 00:06:14.780 "bdev_nvme_start_discovery", 00:06:14.780 "bdev_nvme_get_controller_health_info", 00:06:14.780 "bdev_nvme_disable_controller", 00:06:14.780 "bdev_nvme_enable_controller", 00:06:14.780 "bdev_nvme_reset_controller", 00:06:14.780 "bdev_nvme_get_transport_statistics", 00:06:14.780 "bdev_nvme_apply_firmware", 00:06:14.780 "bdev_nvme_detach_controller", 00:06:14.780 "bdev_nvme_get_controllers", 00:06:14.780 "bdev_nvme_attach_controller", 00:06:14.780 "bdev_nvme_set_hotplug", 00:06:14.780 "bdev_nvme_set_options", 00:06:14.780 "bdev_passthru_delete", 00:06:14.780 "bdev_passthru_create", 00:06:14.780 "bdev_lvol_grow_lvstore", 00:06:14.780 "bdev_lvol_get_lvols", 00:06:14.780 "bdev_lvol_get_lvstores", 00:06:14.780 "bdev_lvol_delete", 00:06:14.780 "bdev_lvol_set_read_only", 00:06:14.780 "bdev_lvol_resize", 00:06:14.780 "bdev_lvol_decouple_parent", 00:06:14.780 "bdev_lvol_inflate", 00:06:14.780 "bdev_lvol_rename", 00:06:14.780 "bdev_lvol_clone_bdev", 00:06:14.780 "bdev_lvol_clone", 00:06:14.780 "bdev_lvol_snapshot", 00:06:14.780 "bdev_lvol_create", 00:06:14.780 "bdev_lvol_delete_lvstore", 00:06:14.780 "bdev_lvol_rename_lvstore", 00:06:14.780 "bdev_lvol_create_lvstore", 00:06:14.780 "bdev_raid_set_options", 00:06:14.780 "bdev_raid_remove_base_bdev", 00:06:14.780 "bdev_raid_add_base_bdev", 
00:06:14.780 "bdev_raid_delete", 00:06:14.780 "bdev_raid_create", 00:06:14.780 "bdev_raid_get_bdevs", 00:06:14.780 "bdev_error_inject_error", 00:06:14.780 "bdev_error_delete", 00:06:14.780 "bdev_error_create", 00:06:14.780 "bdev_split_delete", 00:06:14.780 "bdev_split_create", 00:06:14.780 "bdev_delay_delete", 00:06:14.780 "bdev_delay_create", 00:06:14.780 "bdev_delay_update_latency", 00:06:14.780 "bdev_zone_block_delete", 00:06:14.780 "bdev_zone_block_create", 00:06:14.780 "blobfs_create", 00:06:14.780 "blobfs_detect", 00:06:14.780 "blobfs_set_cache_size", 00:06:14.780 "bdev_aio_delete", 00:06:14.780 "bdev_aio_rescan", 00:06:14.780 "bdev_aio_create", 00:06:14.780 "bdev_ftl_set_property", 00:06:14.780 "bdev_ftl_get_properties", 00:06:14.780 "bdev_ftl_get_stats", 00:06:14.780 "bdev_ftl_unmap", 00:06:14.780 "bdev_ftl_unload", 00:06:14.780 "bdev_ftl_delete", 00:06:14.780 "bdev_ftl_load", 00:06:14.780 "bdev_ftl_create", 00:06:14.780 "bdev_virtio_attach_controller", 00:06:14.780 "bdev_virtio_scsi_get_devices", 00:06:14.780 "bdev_virtio_detach_controller", 00:06:14.780 "bdev_virtio_blk_set_hotplug", 00:06:14.780 "bdev_iscsi_delete", 00:06:14.780 "bdev_iscsi_create", 00:06:14.780 "bdev_iscsi_set_options", 00:06:14.780 "bdev_uring_delete", 00:06:14.780 "bdev_uring_create", 00:06:14.780 "accel_error_inject_error", 00:06:14.780 "ioat_scan_accel_module", 00:06:14.780 "dsa_scan_accel_module", 00:06:14.780 "iaa_scan_accel_module", 00:06:14.780 "iscsi_set_options", 00:06:14.780 "iscsi_get_auth_groups", 00:06:14.780 "iscsi_auth_group_remove_secret", 00:06:14.780 "iscsi_auth_group_add_secret", 00:06:14.780 "iscsi_delete_auth_group", 00:06:14.780 "iscsi_create_auth_group", 00:06:14.780 "iscsi_set_discovery_auth", 00:06:14.780 "iscsi_get_options", 00:06:14.780 "iscsi_target_node_request_logout", 00:06:14.780 "iscsi_target_node_set_redirect", 00:06:14.780 "iscsi_target_node_set_auth", 00:06:14.780 "iscsi_target_node_add_lun", 00:06:14.780 "iscsi_get_connections", 00:06:14.780 "iscsi_portal_group_set_auth", 00:06:14.780 "iscsi_start_portal_group", 00:06:14.780 "iscsi_delete_portal_group", 00:06:14.780 "iscsi_create_portal_group", 00:06:14.780 "iscsi_get_portal_groups", 00:06:14.780 "iscsi_delete_target_node", 00:06:14.780 "iscsi_target_node_remove_pg_ig_maps", 00:06:14.780 "iscsi_target_node_add_pg_ig_maps", 00:06:14.780 "iscsi_create_target_node", 00:06:14.780 "iscsi_get_target_nodes", 00:06:14.780 "iscsi_delete_initiator_group", 00:06:14.780 "iscsi_initiator_group_remove_initiators", 00:06:14.780 "iscsi_initiator_group_add_initiators", 00:06:14.780 "iscsi_create_initiator_group", 00:06:14.780 "iscsi_get_initiator_groups", 00:06:14.780 "nvmf_set_crdt", 00:06:14.780 "nvmf_set_config", 00:06:14.780 "nvmf_set_max_subsystems", 00:06:14.780 "nvmf_subsystem_get_listeners", 00:06:14.780 "nvmf_subsystem_get_qpairs", 00:06:14.780 "nvmf_subsystem_get_controllers", 00:06:14.780 "nvmf_get_stats", 00:06:14.780 "nvmf_get_transports", 00:06:14.780 "nvmf_create_transport", 00:06:14.780 "nvmf_get_targets", 00:06:14.780 "nvmf_delete_target", 00:06:14.780 "nvmf_create_target", 00:06:14.780 "nvmf_subsystem_allow_any_host", 00:06:14.780 "nvmf_subsystem_remove_host", 00:06:14.780 "nvmf_subsystem_add_host", 00:06:14.780 "nvmf_subsystem_remove_ns", 00:06:14.780 "nvmf_subsystem_add_ns", 00:06:14.780 "nvmf_subsystem_listener_set_ana_state", 00:06:14.780 "nvmf_discovery_get_referrals", 00:06:14.780 "nvmf_discovery_remove_referral", 00:06:14.780 "nvmf_discovery_add_referral", 00:06:14.780 "nvmf_subsystem_remove_listener", 00:06:14.780 
"nvmf_subsystem_add_listener", 00:06:14.780 "nvmf_delete_subsystem", 00:06:14.780 "nvmf_create_subsystem", 00:06:14.780 "nvmf_get_subsystems", 00:06:14.780 "env_dpdk_get_mem_stats", 00:06:14.780 "nbd_get_disks", 00:06:14.780 "nbd_stop_disk", 00:06:14.780 "nbd_start_disk", 00:06:14.780 "ublk_recover_disk", 00:06:14.780 "ublk_get_disks", 00:06:14.780 "ublk_stop_disk", 00:06:14.780 "ublk_start_disk", 00:06:14.780 "ublk_destroy_target", 00:06:14.780 "ublk_create_target", 00:06:14.780 "virtio_blk_create_transport", 00:06:14.780 "virtio_blk_get_transports", 00:06:14.780 "vhost_controller_set_coalescing", 00:06:14.780 "vhost_get_controllers", 00:06:14.781 "vhost_delete_controller", 00:06:14.781 "vhost_create_blk_controller", 00:06:14.781 "vhost_scsi_controller_remove_target", 00:06:14.781 "vhost_scsi_controller_add_target", 00:06:14.781 "vhost_start_scsi_controller", 00:06:14.781 "vhost_create_scsi_controller", 00:06:14.781 "thread_set_cpumask", 00:06:14.781 "framework_get_scheduler", 00:06:14.781 "framework_set_scheduler", 00:06:14.781 "framework_get_reactors", 00:06:14.781 "thread_get_io_channels", 00:06:14.781 "thread_get_pollers", 00:06:14.781 "thread_get_stats", 00:06:14.781 "framework_monitor_context_switch", 00:06:14.781 "spdk_kill_instance", 00:06:14.781 "log_enable_timestamps", 00:06:14.781 "log_get_flags", 00:06:14.781 "log_clear_flag", 00:06:14.781 "log_set_flag", 00:06:14.781 "log_get_level", 00:06:14.781 "log_set_level", 00:06:14.781 "log_get_print_level", 00:06:14.781 "log_set_print_level", 00:06:14.781 "framework_enable_cpumask_locks", 00:06:14.781 "framework_disable_cpumask_locks", 00:06:14.781 "framework_wait_init", 00:06:14.781 "framework_start_init", 00:06:14.781 "scsi_get_devices", 00:06:14.781 "bdev_get_histogram", 00:06:14.781 "bdev_enable_histogram", 00:06:14.781 "bdev_set_qos_limit", 00:06:14.781 "bdev_set_qd_sampling_period", 00:06:14.781 "bdev_get_bdevs", 00:06:14.781 "bdev_reset_iostat", 00:06:14.781 "bdev_get_iostat", 00:06:14.781 "bdev_examine", 00:06:14.781 "bdev_wait_for_examine", 00:06:14.781 "bdev_set_options", 00:06:14.781 "notify_get_notifications", 00:06:14.781 "notify_get_types", 00:06:14.781 "accel_get_stats", 00:06:14.781 "accel_set_options", 00:06:14.781 "accel_set_driver", 00:06:14.781 "accel_crypto_key_destroy", 00:06:14.781 "accel_crypto_keys_get", 00:06:14.781 "accel_crypto_key_create", 00:06:14.781 "accel_assign_opc", 00:06:14.781 "accel_get_module_info", 00:06:14.781 "accel_get_opc_assignments", 00:06:14.781 "vmd_rescan", 00:06:14.781 "vmd_remove_device", 00:06:14.781 "vmd_enable", 00:06:14.781 "sock_set_default_impl", 00:06:14.781 "sock_impl_set_options", 00:06:14.781 "sock_impl_get_options", 00:06:14.781 "iobuf_get_stats", 00:06:14.781 "iobuf_set_options", 00:06:14.781 "framework_get_pci_devices", 00:06:14.781 "framework_get_config", 00:06:14.781 "framework_get_subsystems", 00:06:14.781 "trace_get_info", 00:06:14.781 "trace_get_tpoint_group_mask", 00:06:14.781 "trace_disable_tpoint_group", 00:06:14.781 "trace_enable_tpoint_group", 00:06:14.781 "trace_clear_tpoint_mask", 00:06:14.781 "trace_set_tpoint_mask", 00:06:14.781 "spdk_get_version", 00:06:14.781 "rpc_get_methods" 00:06:14.781 ] 00:06:14.781 07:15:36 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:14.781 07:15:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.781 07:15:36 -- common/autotest_common.sh@10 -- # set +x 00:06:14.781 07:15:36 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:14.781 07:15:36 -- spdkcli/tcp.sh@38 -- # killprocess 66640 00:06:14.781 
07:15:36 -- common/autotest_common.sh@936 -- # '[' -z 66640 ']' 00:06:14.781 07:15:36 -- common/autotest_common.sh@940 -- # kill -0 66640 00:06:14.781 07:15:36 -- common/autotest_common.sh@941 -- # uname 00:06:14.781 07:15:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.781 07:15:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66640 00:06:14.781 killing process with pid 66640 00:06:14.781 07:15:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.781 07:15:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.781 07:15:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66640' 00:06:14.781 07:15:36 -- common/autotest_common.sh@955 -- # kill 66640 00:06:14.781 07:15:36 -- common/autotest_common.sh@960 -- # wait 66640 00:06:15.349 ************************************ 00:06:15.349 END TEST spdkcli_tcp 00:06:15.349 ************************************ 00:06:15.349 00:06:15.349 real 0m2.017s 00:06:15.349 user 0m3.766s 00:06:15.349 sys 0m0.508s 00:06:15.349 07:15:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.349 07:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.349 07:15:37 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.349 07:15:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.349 07:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.349 07:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.349 ************************************ 00:06:15.349 START TEST dpdk_mem_utility 00:06:15.349 ************************************ 00:06:15.349 07:15:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.349 * Looking for test storage... 00:06:15.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:15.349 07:15:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.349 07:15:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.349 07:15:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:15.349 07:15:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:15.349 07:15:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:15.349 07:15:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:15.349 07:15:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:15.349 07:15:37 -- scripts/common.sh@335 -- # IFS=.-: 00:06:15.350 07:15:37 -- scripts/common.sh@335 -- # read -ra ver1 00:06:15.350 07:15:37 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.350 07:15:37 -- scripts/common.sh@336 -- # read -ra ver2 00:06:15.350 07:15:37 -- scripts/common.sh@337 -- # local 'op=<' 00:06:15.350 07:15:37 -- scripts/common.sh@339 -- # ver1_l=2 00:06:15.350 07:15:37 -- scripts/common.sh@340 -- # ver2_l=1 00:06:15.350 07:15:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:15.350 07:15:37 -- scripts/common.sh@343 -- # case "$op" in 00:06:15.350 07:15:37 -- scripts/common.sh@344 -- # : 1 00:06:15.350 07:15:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:15.350 07:15:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.350 07:15:37 -- scripts/common.sh@364 -- # decimal 1 00:06:15.350 07:15:37 -- scripts/common.sh@352 -- # local d=1 00:06:15.350 07:15:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.350 07:15:37 -- scripts/common.sh@354 -- # echo 1 00:06:15.350 07:15:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:15.350 07:15:37 -- scripts/common.sh@365 -- # decimal 2 00:06:15.350 07:15:37 -- scripts/common.sh@352 -- # local d=2 00:06:15.350 07:15:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.350 07:15:37 -- scripts/common.sh@354 -- # echo 2 00:06:15.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.350 07:15:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:15.350 07:15:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:15.350 07:15:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:15.350 07:15:37 -- scripts/common.sh@367 -- # return 0 00:06:15.350 07:15:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.350 07:15:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.350 --rc genhtml_branch_coverage=1 00:06:15.350 --rc genhtml_function_coverage=1 00:06:15.350 --rc genhtml_legend=1 00:06:15.350 --rc geninfo_all_blocks=1 00:06:15.350 --rc geninfo_unexecuted_blocks=1 00:06:15.350 00:06:15.350 ' 00:06:15.350 07:15:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.350 --rc genhtml_branch_coverage=1 00:06:15.350 --rc genhtml_function_coverage=1 00:06:15.350 --rc genhtml_legend=1 00:06:15.350 --rc geninfo_all_blocks=1 00:06:15.350 --rc geninfo_unexecuted_blocks=1 00:06:15.350 00:06:15.350 ' 00:06:15.350 07:15:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.350 --rc genhtml_branch_coverage=1 00:06:15.350 --rc genhtml_function_coverage=1 00:06:15.350 --rc genhtml_legend=1 00:06:15.350 --rc geninfo_all_blocks=1 00:06:15.350 --rc geninfo_unexecuted_blocks=1 00:06:15.350 00:06:15.350 ' 00:06:15.350 07:15:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.350 --rc genhtml_branch_coverage=1 00:06:15.350 --rc genhtml_function_coverage=1 00:06:15.350 --rc genhtml_legend=1 00:06:15.350 --rc geninfo_all_blocks=1 00:06:15.350 --rc geninfo_unexecuted_blocks=1 00:06:15.350 00:06:15.350 ' 00:06:15.350 07:15:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:15.350 07:15:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66738 00:06:15.350 07:15:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.350 07:15:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66738 00:06:15.350 07:15:37 -- common/autotest_common.sh@829 -- # '[' -z 66738 ']' 00:06:15.350 07:15:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.350 07:15:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.350 07:15:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
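The dpdk_mem_utility steps traced below take two snapshots: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then digests that dump, first as heap/mempool/memzone totals and then, with -m 0, as the per-element listing that follows. A condensed sketch of that flow, assuming the target started above is listening on the default /var/tmp/spdk.sock and that dpdk_mem_info.py reads the default dump path:

  # Sketch: dump DPDK memory statistics from a running target and summarize them.
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}      # assumed checkout location
  RPC_SOCK=/var/tmp/spdk.sock

  # The RPC replies with the path of the dump it wrote (/tmp/spdk_mem_dump.txt).
  "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" env_dpdk_get_mem_stats

  "$SPDK_DIR/scripts/dpdk_mem_info.py"          # heap/mempool/memzone totals
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0     # detailed element list for heap 0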
00:06:15.350 07:15:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.350 07:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.610 [2024-11-28 07:15:37.629286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.610 [2024-11-28 07:15:37.629619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66738 ] 00:06:15.610 [2024-11-28 07:15:37.768867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.610 [2024-11-28 07:15:37.844112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.610 [2024-11-28 07:15:37.844598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.566 07:15:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.566 07:15:38 -- common/autotest_common.sh@862 -- # return 0 00:06:16.566 07:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.566 07:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.566 07:15:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.566 07:15:38 -- common/autotest_common.sh@10 -- # set +x 00:06:16.566 { 00:06:16.566 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.566 } 00:06:16.566 07:15:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.566 07:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:16.566 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:16.566 1 heaps totaling size 814.000000 MiB 00:06:16.566 size: 814.000000 MiB heap id: 0 00:06:16.566 end heaps---------- 00:06:16.566 8 mempools totaling size 598.116089 MiB 00:06:16.566 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.566 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.566 size: 84.521057 MiB name: bdev_io_66738 00:06:16.566 size: 51.011292 MiB name: evtpool_66738 00:06:16.566 size: 50.003479 MiB name: msgpool_66738 00:06:16.566 size: 21.763794 MiB name: PDU_Pool 00:06:16.566 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:16.566 size: 0.026123 MiB name: Session_Pool 00:06:16.566 end mempools------- 00:06:16.566 6 memzones totaling size 4.142822 MiB 00:06:16.566 size: 1.000366 MiB name: RG_ring_0_66738 00:06:16.566 size: 1.000366 MiB name: RG_ring_1_66738 00:06:16.566 size: 1.000366 MiB name: RG_ring_4_66738 00:06:16.566 size: 1.000366 MiB name: RG_ring_5_66738 00:06:16.566 size: 0.125366 MiB name: RG_ring_2_66738 00:06:16.566 size: 0.015991 MiB name: RG_ring_3_66738 00:06:16.566 end memzones------- 00:06:16.566 07:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.566 heap id: 0 total size: 814.000000 MiB number of busy elements: 305 number of free elements: 15 00:06:16.566 list of free elements. 
size: 12.471008 MiB 00:06:16.566 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:16.566 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:16.566 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:16.566 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:16.566 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:16.566 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:16.566 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:16.566 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:16.566 element at address: 0x200000200000 with size: 0.832825 MiB 00:06:16.566 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:06:16.566 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:16.566 element at address: 0x200000800000 with size: 0.486328 MiB 00:06:16.566 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:16.566 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:16.566 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:16.566 list of standard malloc elements. size: 199.266418 MiB 00:06:16.566 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:16.566 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:16.566 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:16.566 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:16.566 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:16.566 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.566 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:16.566 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.566 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:16.566 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:06:16.566 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:16.566 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:16.566 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:16.566 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:16.567 element at 
address: 0x200003a59600 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d700 
with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93580 with size: 0.000183 MiB 
00:06:16.567 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:16.567 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:16.567 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:16.568 element at 
address: 0x200027e6c780 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ec40 
with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:16.568 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:16.568 list of memzone associated elements. 
size: 602.262573 MiB 00:06:16.568 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:16.568 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.568 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:16.568 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.568 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:16.568 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66738_0 00:06:16.568 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:16.568 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66738_0 00:06:16.568 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:16.568 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66738_0 00:06:16.568 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:16.568 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.568 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:16.568 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.568 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:16.568 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66738 00:06:16.568 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:16.568 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66738 00:06:16.568 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.568 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66738 00:06:16.568 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:16.568 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.568 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:16.568 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.568 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:16.568 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.568 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:16.568 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.568 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:16.568 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66738 00:06:16.568 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:16.568 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66738 00:06:16.568 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:16.568 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66738 00:06:16.568 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:16.568 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66738 00:06:16.568 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:16.568 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66738 00:06:16.568 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:16.568 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.568 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:16.568 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.568 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:16.568 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.568 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:16.568 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_66738 00:06:16.568 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:16.568 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.568 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:16.568 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.568 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:16.568 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66738 00:06:16.568 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:16.568 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.568 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:16.568 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66738 00:06:16.568 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:16.568 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66738 00:06:16.568 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:16.568 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.568 07:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.568 07:15:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66738 00:06:16.568 07:15:38 -- common/autotest_common.sh@936 -- # '[' -z 66738 ']' 00:06:16.568 07:15:38 -- common/autotest_common.sh@940 -- # kill -0 66738 00:06:16.568 07:15:38 -- common/autotest_common.sh@941 -- # uname 00:06:16.568 07:15:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.568 07:15:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66738 00:06:16.569 07:15:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.569 07:15:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.569 killing process with pid 66738 00:06:16.569 07:15:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66738' 00:06:16.569 07:15:38 -- common/autotest_common.sh@955 -- # kill 66738 00:06:16.569 07:15:38 -- common/autotest_common.sh@960 -- # wait 66738 00:06:17.137 00:06:17.137 real 0m1.817s 00:06:17.137 user 0m1.961s 00:06:17.137 sys 0m0.465s 00:06:17.137 07:15:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.137 07:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:17.137 ************************************ 00:06:17.137 END TEST dpdk_mem_utility 00:06:17.137 ************************************ 00:06:17.137 07:15:39 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.137 07:15:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.137 07:15:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.137 07:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:17.137 ************************************ 00:06:17.137 START TEST event 00:06:17.137 ************************************ 00:06:17.137 07:15:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.137 * Looking for test storage... 
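[editor's note] The dpdk_mem_utility trace above reduces to three steps: start a target, ask it to dump its DPDK memory statistics over RPC, and summarize the dump. A minimal standalone sketch follows, assuming the default build layout and using scripts/rpc.py directly instead of the harness's rpc_cmd wrapper; the sleep is a crude stand-in for waitforlisten.

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 &              # assumed location of the target binary
    tgt_pid=$!
    sleep 2                                          # stand-in for the harness's waitforlisten
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    "$SPDK/scripts/dpdk_mem_info.py"                 # heap / mempool / memzone totals, as above
    "$SPDK/scripts/dpdk_mem_info.py" -m 0            # per-element breakdown of heap 0, as above
    kill "$tgt_pid" && wait "$tgt_pid"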
00:06:17.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:17.137 07:15:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:17.137 07:15:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:17.137 07:15:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.396 07:15:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.396 07:15:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.396 07:15:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.396 07:15:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.396 07:15:39 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.396 07:15:39 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.396 07:15:39 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.396 07:15:39 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.396 07:15:39 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.396 07:15:39 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.396 07:15:39 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.396 07:15:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.396 07:15:39 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.396 07:15:39 -- scripts/common.sh@344 -- # : 1 00:06:17.396 07:15:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.396 07:15:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.396 07:15:39 -- scripts/common.sh@364 -- # decimal 1 00:06:17.396 07:15:39 -- scripts/common.sh@352 -- # local d=1 00:06:17.396 07:15:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.396 07:15:39 -- scripts/common.sh@354 -- # echo 1 00:06:17.396 07:15:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:17.396 07:15:39 -- scripts/common.sh@365 -- # decimal 2 00:06:17.396 07:15:39 -- scripts/common.sh@352 -- # local d=2 00:06:17.396 07:15:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.396 07:15:39 -- scripts/common.sh@354 -- # echo 2 00:06:17.396 07:15:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:17.396 07:15:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:17.396 07:15:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:17.396 07:15:39 -- scripts/common.sh@367 -- # return 0 00:06:17.396 07:15:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.396 07:15:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:17.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.396 --rc genhtml_branch_coverage=1 00:06:17.397 --rc genhtml_function_coverage=1 00:06:17.397 --rc genhtml_legend=1 00:06:17.397 --rc geninfo_all_blocks=1 00:06:17.397 --rc geninfo_unexecuted_blocks=1 00:06:17.397 00:06:17.397 ' 00:06:17.397 07:15:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:17.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.397 --rc genhtml_branch_coverage=1 00:06:17.397 --rc genhtml_function_coverage=1 00:06:17.397 --rc genhtml_legend=1 00:06:17.397 --rc geninfo_all_blocks=1 00:06:17.397 --rc geninfo_unexecuted_blocks=1 00:06:17.397 00:06:17.397 ' 00:06:17.397 07:15:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:17.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.397 --rc genhtml_branch_coverage=1 00:06:17.397 --rc genhtml_function_coverage=1 00:06:17.397 --rc genhtml_legend=1 00:06:17.397 --rc geninfo_all_blocks=1 00:06:17.397 --rc geninfo_unexecuted_blocks=1 00:06:17.397 00:06:17.397 ' 00:06:17.397 07:15:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:17.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.397 --rc genhtml_branch_coverage=1 00:06:17.397 --rc genhtml_function_coverage=1 00:06:17.397 --rc genhtml_legend=1 00:06:17.397 --rc geninfo_all_blocks=1 00:06:17.397 --rc geninfo_unexecuted_blocks=1 00:06:17.397 00:06:17.397 ' 00:06:17.397 07:15:39 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.397 07:15:39 -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.397 07:15:39 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.397 07:15:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:17.397 07:15:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.397 07:15:39 -- common/autotest_common.sh@10 -- # set +x 00:06:17.397 ************************************ 00:06:17.397 START TEST event_perf 00:06:17.397 ************************************ 00:06:17.397 07:15:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.397 Running I/O for 1 seconds...[2024-11-28 07:15:39.472462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.397 [2024-11-28 07:15:39.472679] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66811 ] 00:06:17.397 [2024-11-28 07:15:39.606564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.657 [2024-11-28 07:15:39.684703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.657 [2024-11-28 07:15:39.684859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.657 [2024-11-28 07:15:39.685003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.657 [2024-11-28 07:15:39.685005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.594 Running I/O for 1 seconds... 00:06:18.594 lcore 0: 164652 00:06:18.594 lcore 1: 164650 00:06:18.594 lcore 2: 164649 00:06:18.594 lcore 3: 164649 00:06:18.594 done. 00:06:18.594 00:06:18.594 real 0m1.305s 00:06:18.594 ************************************ 00:06:18.594 END TEST event_perf 00:06:18.594 ************************************ 00:06:18.594 user 0m4.123s 00:06:18.594 sys 0m0.059s 00:06:18.594 07:15:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.594 07:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:18.594 07:15:40 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.594 07:15:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:18.594 07:15:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.594 07:15:40 -- common/autotest_common.sh@10 -- # set +x 00:06:18.594 ************************************ 00:06:18.594 START TEST event_reactor 00:06:18.594 ************************************ 00:06:18.594 07:15:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.594 [2024-11-28 07:15:40.826997] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
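[editor's note] The event_perf run just traced invokes the test binary with a 4-core mask for one second and prints one per-lcore event count ("lcore N: count") before "done.". A hedged sketch of running it by hand and totalling those counts; the awk filter is illustrative and not part of the test itself.

    PERF=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
    "$PERF" -m 0xF -t 1 | tee /tmp/event_perf.out
    awk '/^lcore/ {sum += $3} END {print "total events:", sum}' /tmp/event_perf.out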
00:06:18.594 [2024-11-28 07:15:40.827100] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66855 ] 00:06:18.853 [2024-11-28 07:15:40.959016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.853 [2024-11-28 07:15:41.034976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.230 test_start 00:06:20.230 oneshot 00:06:20.230 tick 100 00:06:20.230 tick 100 00:06:20.230 tick 250 00:06:20.230 tick 100 00:06:20.230 tick 100 00:06:20.230 tick 100 00:06:20.230 tick 250 00:06:20.230 tick 500 00:06:20.230 tick 100 00:06:20.230 tick 100 00:06:20.230 tick 250 00:06:20.230 tick 100 00:06:20.230 tick 100 00:06:20.230 test_end 00:06:20.230 ************************************ 00:06:20.230 END TEST event_reactor 00:06:20.230 ************************************ 00:06:20.230 00:06:20.230 real 0m1.294s 00:06:20.230 user 0m1.131s 00:06:20.230 sys 0m0.056s 00:06:20.230 07:15:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.230 07:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:20.230 07:15:42 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.230 07:15:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:20.230 07:15:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.230 07:15:42 -- common/autotest_common.sh@10 -- # set +x 00:06:20.231 ************************************ 00:06:20.231 START TEST event_reactor_perf 00:06:20.231 ************************************ 00:06:20.231 07:15:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.231 [2024-11-28 07:15:42.176957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
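[editor's note] The event_reactor test above appears to register a oneshot poller plus timed pollers with different periods and prints one "tick <period>" line per expiry between test_start and test_end. A small, hedged way to tally those lines when running the binary directly:

    REACTOR=/home/vagrant/spdk_repo/spdk/test/event/reactor/reactor
    "$REACTOR" -t 1 | awk '/^tick/ {n[$2]++} END {for (p in n) print "period " p ": " n[p] " expiries"}'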
00:06:20.231 [2024-11-28 07:15:42.177352] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66885 ] 00:06:20.231 [2024-11-28 07:15:42.314049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.231 [2024-11-28 07:15:42.371141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.235 test_start 00:06:21.235 test_end 00:06:21.235 Performance: 388252 events per second 00:06:21.235 00:06:21.235 real 0m1.274s 00:06:21.235 user 0m1.118s 00:06:21.235 sys 0m0.049s 00:06:21.236 07:15:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.236 07:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:21.236 ************************************ 00:06:21.236 END TEST event_reactor_perf 00:06:21.236 ************************************ 00:06:21.236 07:15:43 -- event/event.sh@49 -- # uname -s 00:06:21.236 07:15:43 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.236 07:15:43 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.236 07:15:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.236 07:15:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.236 07:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:21.236 ************************************ 00:06:21.236 START TEST event_scheduler 00:06:21.236 ************************************ 00:06:21.236 07:15:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.495 * Looking for test storage... 00:06:21.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:21.495 07:15:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:21.495 07:15:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:21.495 07:15:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:21.495 07:15:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:21.495 07:15:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:21.495 07:15:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:21.495 07:15:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:21.495 07:15:43 -- scripts/common.sh@335 -- # IFS=.-: 00:06:21.495 07:15:43 -- scripts/common.sh@335 -- # read -ra ver1 00:06:21.495 07:15:43 -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.495 07:15:43 -- scripts/common.sh@336 -- # read -ra ver2 00:06:21.495 07:15:43 -- scripts/common.sh@337 -- # local 'op=<' 00:06:21.495 07:15:43 -- scripts/common.sh@339 -- # ver1_l=2 00:06:21.495 07:15:43 -- scripts/common.sh@340 -- # ver2_l=1 00:06:21.495 07:15:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:21.495 07:15:43 -- scripts/common.sh@343 -- # case "$op" in 00:06:21.495 07:15:43 -- scripts/common.sh@344 -- # : 1 00:06:21.495 07:15:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:21.495 07:15:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.495 07:15:43 -- scripts/common.sh@364 -- # decimal 1 00:06:21.495 07:15:43 -- scripts/common.sh@352 -- # local d=1 00:06:21.495 07:15:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.495 07:15:43 -- scripts/common.sh@354 -- # echo 1 00:06:21.495 07:15:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:21.495 07:15:43 -- scripts/common.sh@365 -- # decimal 2 00:06:21.495 07:15:43 -- scripts/common.sh@352 -- # local d=2 00:06:21.495 07:15:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.495 07:15:43 -- scripts/common.sh@354 -- # echo 2 00:06:21.495 07:15:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:21.495 07:15:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:21.495 07:15:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:21.495 07:15:43 -- scripts/common.sh@367 -- # return 0 00:06:21.495 07:15:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.495 07:15:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:21.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.496 --rc genhtml_branch_coverage=1 00:06:21.496 --rc genhtml_function_coverage=1 00:06:21.496 --rc genhtml_legend=1 00:06:21.496 --rc geninfo_all_blocks=1 00:06:21.496 --rc geninfo_unexecuted_blocks=1 00:06:21.496 00:06:21.496 ' 00:06:21.496 07:15:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:21.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.496 --rc genhtml_branch_coverage=1 00:06:21.496 --rc genhtml_function_coverage=1 00:06:21.496 --rc genhtml_legend=1 00:06:21.496 --rc geninfo_all_blocks=1 00:06:21.496 --rc geninfo_unexecuted_blocks=1 00:06:21.496 00:06:21.496 ' 00:06:21.496 07:15:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:21.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.496 --rc genhtml_branch_coverage=1 00:06:21.496 --rc genhtml_function_coverage=1 00:06:21.496 --rc genhtml_legend=1 00:06:21.496 --rc geninfo_all_blocks=1 00:06:21.496 --rc geninfo_unexecuted_blocks=1 00:06:21.496 00:06:21.496 ' 00:06:21.496 07:15:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:21.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.496 --rc genhtml_branch_coverage=1 00:06:21.496 --rc genhtml_function_coverage=1 00:06:21.496 --rc genhtml_legend=1 00:06:21.496 --rc geninfo_all_blocks=1 00:06:21.496 --rc geninfo_unexecuted_blocks=1 00:06:21.496 00:06:21.496 ' 00:06:21.496 07:15:43 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.496 07:15:43 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66954 00:06:21.496 07:15:43 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.496 07:15:43 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.496 07:15:43 -- scheduler/scheduler.sh@37 -- # waitforlisten 66954 00:06:21.496 07:15:43 -- common/autotest_common.sh@829 -- # '[' -z 66954 ']' 00:06:21.496 07:15:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.496 07:15:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.496 07:15:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
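[editor's note] waitforlisten, called here for pid 66954, blocks until the freshly started app is accepting RPCs on /var/tmp/spdk.sock. A hedged approximation of that loop, polling a trivial RPC (rpc_get_methods) and assuming $scheduler_pid is set as in the trace; the real helper in autotest_common.sh may do additional checks.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break                                     # RPC server is up and listening
        fi
        kill -0 "$scheduler_pid" 2>/dev/null || exit 1   # bail out if the app already died
        sleep 0.1
    done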
00:06:21.496 07:15:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.496 07:15:43 -- common/autotest_common.sh@10 -- # set +x 00:06:21.496 [2024-11-28 07:15:43.730938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.496 [2024-11-28 07:15:43.731296] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66954 ] 00:06:21.755 [2024-11-28 07:15:43.874820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.755 [2024-11-28 07:15:43.992044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.755 [2024-11-28 07:15:43.992217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.755 [2024-11-28 07:15:43.992364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.755 [2024-11-28 07:15:43.992368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.690 07:15:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.690 07:15:44 -- common/autotest_common.sh@862 -- # return 0 00:06:22.690 07:15:44 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:22.690 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.690 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.690 POWER: Env isn't set yet! 00:06:22.690 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:22.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.690 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.690 POWER: Attempting to initialise PSTAT power management... 00:06:22.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.690 POWER: Cannot set governor of lcore 0 to performance 00:06:22.690 POWER: Attempting to initialise AMD PSTATE power management... 00:06:22.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.690 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.690 POWER: Attempting to initialise CPPC power management... 00:06:22.690 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.690 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.690 POWER: Attempting to initialise VM power management... 
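[editor's note] Because the scheduler app was started with --wait-for-rpc, the test selects the scheduler over RPC before letting initialization finish; the POWER errors below simply mean the VM's cpufreq governors are not writable, so the dynamic scheduler runs without a governor. The RPC sequence, sketched with scripts/rpc.py (the last call is an illustrative check, not part of the traced test):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC framework_set_scheduler dynamic    # must be chosen before init completes
    $RPC framework_start_init               # finish subsystem init, reactors start scheduling
    $RPC framework_get_scheduler            # illustrative check only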
00:06:22.690 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:22.690 POWER: Unable to set Power Management Environment for lcore 0 00:06:22.690 [2024-11-28 07:15:44.779290] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:22.690 [2024-11-28 07:15:44.779355] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:22.690 [2024-11-28 07:15:44.779365] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:22.690 [2024-11-28 07:15:44.779392] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:22.690 [2024-11-28 07:15:44.779400] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:22.690 [2024-11-28 07:15:44.779408] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:22.690 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.690 07:15:44 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:22.690 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.690 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.691 [2024-11-28 07:15:44.903771] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:22.691 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.691 07:15:44 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:22.691 07:15:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.691 07:15:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.691 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.691 ************************************ 00:06:22.691 START TEST scheduler_create_thread 00:06:22.691 ************************************ 00:06:22.691 07:15:44 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:22.691 07:15:44 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:22.691 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.691 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.691 2 00:06:22.691 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.691 07:15:44 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:22.691 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.691 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.691 3 00:06:22.691 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.691 07:15:44 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:22.691 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.691 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.691 4 00:06:22.691 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.691 07:15:44 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:22.691 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.691 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.691 5 00:06:22.691 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.691 07:15:44 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:22.691 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.691 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.951 6 00:06:22.951 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.951 07:15:44 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:22.951 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.951 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.951 7 00:06:22.951 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.951 07:15:44 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:22.951 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.951 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.951 8 00:06:22.951 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.951 07:15:44 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:22.951 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.951 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.951 9 00:06:22.951 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.951 07:15:44 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:22.951 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.951 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.951 10 00:06:22.951 07:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.951 07:15:44 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:22.951 07:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.952 07:15:44 -- common/autotest_common.sh@10 -- # set +x 00:06:22.952 07:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.952 07:15:45 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:22.952 07:15:45 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:22.952 07:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.952 07:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:22.952 07:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.952 07:15:45 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:22.952 07:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.952 07:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:24.330 07:15:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.330 07:15:46 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:24.330 07:15:46 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:24.330 07:15:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.330 07:15:46 -- common/autotest_common.sh@10 -- # set +x 00:06:25.268 ************************************ 00:06:25.268 END TEST scheduler_create_thread 00:06:25.268 ************************************ 00:06:25.268 07:15:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.268 00:06:25.268 real 0m2.616s 00:06:25.268 user 0m0.020s 00:06:25.268 sys 0m0.005s 00:06:25.268 07:15:47 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.268 07:15:47 -- common/autotest_common.sh@10 -- # set +x 00:06:25.527 07:15:47 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:25.527 07:15:47 -- scheduler/scheduler.sh@46 -- # killprocess 66954 00:06:25.527 07:15:47 -- common/autotest_common.sh@936 -- # '[' -z 66954 ']' 00:06:25.527 07:15:47 -- common/autotest_common.sh@940 -- # kill -0 66954 00:06:25.527 07:15:47 -- common/autotest_common.sh@941 -- # uname 00:06:25.527 07:15:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:25.527 07:15:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66954 00:06:25.527 killing process with pid 66954 00:06:25.527 07:15:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:25.527 07:15:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:25.527 07:15:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66954' 00:06:25.527 07:15:47 -- common/autotest_common.sh@955 -- # kill 66954 00:06:25.527 07:15:47 -- common/autotest_common.sh@960 -- # wait 66954 00:06:25.786 [2024-11-28 07:15:48.012460] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:26.045 00:06:26.045 real 0m4.741s 00:06:26.045 user 0m8.929s 00:06:26.045 sys 0m0.460s 00:06:26.045 ************************************ 00:06:26.045 END TEST event_scheduler 00:06:26.045 ************************************ 00:06:26.045 07:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.045 07:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:26.045 07:15:48 -- event/event.sh@51 -- # modprobe -n nbd 00:06:26.045 07:15:48 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:26.045 07:15:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.045 07:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.045 07:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:26.045 ************************************ 00:06:26.045 START TEST app_repeat 00:06:26.045 ************************************ 00:06:26.045 07:15:48 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:26.045 07:15:48 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.045 07:15:48 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.045 07:15:48 -- event/event.sh@13 -- # local nbd_list 00:06:26.045 07:15:48 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.045 07:15:48 -- event/event.sh@14 -- # local bdev_list 00:06:26.045 07:15:48 -- event/event.sh@15 -- # local repeat_times=4 00:06:26.045 07:15:48 -- event/event.sh@17 -- # modprobe nbd 00:06:26.045 07:15:48 -- event/event.sh@19 -- # repeat_pid=67053 00:06:26.045 07:15:48 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.045 Process app_repeat pid: 67053 00:06:26.045 spdk_app_start Round 0 00:06:26.045 07:15:48 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 67053' 00:06:26.045 07:15:48 -- event/event.sh@23 -- # for i in {0..2} 00:06:26.045 07:15:48 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:26.045 07:15:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:26.045 07:15:48 -- event/event.sh@25 -- # waitforlisten 67053 /var/tmp/spdk-nbd.sock 00:06:26.045 07:15:48 -- common/autotest_common.sh@829 -- # '[' -z 67053 ']' 00:06:26.045 07:15:48 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.045 07:15:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.045 07:15:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.045 07:15:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.045 07:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:26.304 [2024-11-28 07:15:48.321212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.304 [2024-11-28 07:15:48.321366] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67053 ] 00:06:26.304 [2024-11-28 07:15:48.460490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.304 [2024-11-28 07:15:48.542701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.304 [2024-11-28 07:15:48.542713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.240 07:15:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.240 07:15:49 -- common/autotest_common.sh@862 -- # return 0 00:06:27.240 07:15:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.240 Malloc0 00:06:27.499 07:15:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.759 Malloc1 00:06:27.759 07:15:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@12 -- # local i 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.759 07:15:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.018 /dev/nbd0 00:06:28.018 07:15:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.018 07:15:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.018 07:15:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:28.018 07:15:50 -- common/autotest_common.sh@867 -- # local i 00:06:28.018 07:15:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.018 07:15:50 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.018 07:15:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:28.018 07:15:50 -- common/autotest_common.sh@871 -- # break 00:06:28.018 07:15:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.018 07:15:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.018 07:15:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.018 1+0 records in 00:06:28.018 1+0 records out 00:06:28.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327111 s, 12.5 MB/s 00:06:28.018 07:15:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.018 07:15:50 -- common/autotest_common.sh@884 -- # size=4096 00:06:28.018 07:15:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.018 07:15:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.018 07:15:50 -- common/autotest_common.sh@887 -- # return 0 00:06:28.018 07:15:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.018 07:15:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.018 07:15:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.319 /dev/nbd1 00:06:28.319 07:15:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.319 07:15:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.319 07:15:50 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:28.319 07:15:50 -- common/autotest_common.sh@867 -- # local i 00:06:28.319 07:15:50 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:28.319 07:15:50 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:28.319 07:15:50 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:28.319 07:15:50 -- common/autotest_common.sh@871 -- # break 00:06:28.319 07:15:50 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:28.319 07:15:50 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:28.319 07:15:50 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.319 1+0 records in 00:06:28.319 1+0 records out 00:06:28.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773413 s, 5.3 MB/s 00:06:28.319 07:15:50 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.319 07:15:50 -- common/autotest_common.sh@884 -- # size=4096 00:06:28.319 07:15:50 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.319 07:15:50 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:28.319 07:15:50 -- common/autotest_common.sh@887 -- # return 0 00:06:28.319 07:15:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.319 07:15:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.319 07:15:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.319 07:15:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.319 07:15:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.578 { 00:06:28.578 "nbd_device": "/dev/nbd0", 00:06:28.578 "bdev_name": "Malloc0" 00:06:28.578 }, 00:06:28.578 { 00:06:28.578 "nbd_device": "/dev/nbd1", 
00:06:28.578 "bdev_name": "Malloc1" 00:06:28.578 } 00:06:28.578 ]' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.578 { 00:06:28.578 "nbd_device": "/dev/nbd0", 00:06:28.578 "bdev_name": "Malloc0" 00:06:28.578 }, 00:06:28.578 { 00:06:28.578 "nbd_device": "/dev/nbd1", 00:06:28.578 "bdev_name": "Malloc1" 00:06:28.578 } 00:06:28.578 ]' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.578 /dev/nbd1' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.578 /dev/nbd1' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.578 256+0 records in 00:06:28.578 256+0 records out 00:06:28.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010711 s, 97.9 MB/s 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.578 256+0 records in 00:06:28.578 256+0 records out 00:06:28.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215583 s, 48.6 MB/s 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.578 256+0 records in 00:06:28.578 256+0 records out 00:06:28.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232745 s, 45.1 MB/s 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@51 -- # local i 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.578 07:15:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@41 -- # break 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.836 07:15:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@41 -- # break 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.404 07:15:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.662 07:15:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.662 07:15:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.662 07:15:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.662 07:15:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.662 07:15:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.663 07:15:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.663 07:15:51 -- bdev/nbd_common.sh@65 -- # true 00:06:29.663 07:15:51 -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.663 07:15:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.663 07:15:51 -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.663 07:15:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.663 07:15:51 -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.663 07:15:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.921 07:15:52 -- event/event.sh@35 -- # sleep 3 00:06:30.179 [2024-11-28 07:15:52.242322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.179 [2024-11-28 07:15:52.308936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.179 [2024-11-28 
07:15:52.308948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.179 [2024-11-28 07:15:52.367760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.179 [2024-11-28 07:15:52.367821] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.892 spdk_app_start Round 1 00:06:32.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.892 07:15:55 -- event/event.sh@23 -- # for i in {0..2} 00:06:32.892 07:15:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:32.892 07:15:55 -- event/event.sh@25 -- # waitforlisten 67053 /var/tmp/spdk-nbd.sock 00:06:32.892 07:15:55 -- common/autotest_common.sh@829 -- # '[' -z 67053 ']' 00:06:32.892 07:15:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.892 07:15:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.892 07:15:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.892 07:15:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.892 07:15:55 -- common/autotest_common.sh@10 -- # set +x 00:06:33.155 07:15:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.155 07:15:55 -- common/autotest_common.sh@862 -- # return 0 00:06:33.155 07:15:55 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.418 Malloc0 00:06:33.677 07:15:55 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.936 Malloc1 00:06:33.936 07:15:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.936 07:15:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.936 07:15:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.936 07:15:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.936 07:15:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.936 07:15:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@12 -- # local i 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.936 07:15:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.195 /dev/nbd0 00:06:34.195 07:15:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.195 07:15:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.195 07:15:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:34.195 07:15:56 -- common/autotest_common.sh@867 -- # local i 00:06:34.195 07:15:56 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:34.195 07:15:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.195 07:15:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:34.195 07:15:56 -- common/autotest_common.sh@871 -- # break 00:06:34.195 07:15:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.195 07:15:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.195 07:15:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.195 1+0 records in 00:06:34.195 1+0 records out 00:06:34.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309618 s, 13.2 MB/s 00:06:34.195 07:15:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.195 07:15:56 -- common/autotest_common.sh@884 -- # size=4096 00:06:34.196 07:15:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.196 07:15:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.196 07:15:56 -- common/autotest_common.sh@887 -- # return 0 00:06:34.196 07:15:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.196 07:15:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.196 07:15:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.455 /dev/nbd1 00:06:34.455 07:15:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.455 07:15:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.455 07:15:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:34.455 07:15:56 -- common/autotest_common.sh@867 -- # local i 00:06:34.455 07:15:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:34.455 07:15:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:34.455 07:15:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:34.455 07:15:56 -- common/autotest_common.sh@871 -- # break 00:06:34.455 07:15:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:34.455 07:15:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:34.455 07:15:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.455 1+0 records in 00:06:34.455 1+0 records out 00:06:34.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235016 s, 17.4 MB/s 00:06:34.455 07:15:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.455 07:15:56 -- common/autotest_common.sh@884 -- # size=4096 00:06:34.455 07:15:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:34.455 07:15:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:34.455 07:15:56 -- common/autotest_common.sh@887 -- # return 0 00:06:34.455 07:15:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.455 07:15:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.455 07:15:56 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.455 07:15:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.455 07:15:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.714 { 00:06:34.714 "nbd_device": "/dev/nbd0", 00:06:34.714 "bdev_name": "Malloc0" 00:06:34.714 }, 00:06:34.714 { 00:06:34.714 
"nbd_device": "/dev/nbd1", 00:06:34.714 "bdev_name": "Malloc1" 00:06:34.714 } 00:06:34.714 ]' 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.714 { 00:06:34.714 "nbd_device": "/dev/nbd0", 00:06:34.714 "bdev_name": "Malloc0" 00:06:34.714 }, 00:06:34.714 { 00:06:34.714 "nbd_device": "/dev/nbd1", 00:06:34.714 "bdev_name": "Malloc1" 00:06:34.714 } 00:06:34.714 ]' 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.714 /dev/nbd1' 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.714 /dev/nbd1' 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.714 256+0 records in 00:06:34.714 256+0 records out 00:06:34.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00795393 s, 132 MB/s 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.714 256+0 records in 00:06:34.714 256+0 records out 00:06:34.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292318 s, 35.9 MB/s 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.714 07:15:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.973 256+0 records in 00:06:34.973 256+0 records out 00:06:34.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03256 s, 32.2 MB/s 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.973 07:15:57 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@51 -- # local i 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.973 07:15:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@41 -- # break 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.233 07:15:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@41 -- # break 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.492 07:15:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@65 -- # true 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.751 07:15:57 -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.751 07:15:57 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.319 07:15:58 -- event/event.sh@35 -- # sleep 3 00:06:36.319 [2024-11-28 07:15:58.502587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.319 [2024-11-28 07:15:58.571576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:06:36.319 [2024-11-28 07:15:58.571586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.577 [2024-11-28 07:15:58.631171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.577 [2024-11-28 07:15:58.631241] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:39.111 07:16:01 -- event/event.sh@23 -- # for i in {0..2} 00:06:39.111 07:16:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:39.111 spdk_app_start Round 2 00:06:39.111 07:16:01 -- event/event.sh@25 -- # waitforlisten 67053 /var/tmp/spdk-nbd.sock 00:06:39.111 07:16:01 -- common/autotest_common.sh@829 -- # '[' -z 67053 ']' 00:06:39.111 07:16:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.111 07:16:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.111 07:16:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.111 07:16:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.111 07:16:01 -- common/autotest_common.sh@10 -- # set +x 00:06:39.369 07:16:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.369 07:16:01 -- common/autotest_common.sh@862 -- # return 0 00:06:39.369 07:16:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.627 Malloc0 00:06:39.627 07:16:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.885 Malloc1 00:06:39.885 07:16:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@12 -- # local i 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.885 07:16:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.145 /dev/nbd0 00:06:40.145 07:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.145 07:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.145 07:16:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:40.145 07:16:02 -- common/autotest_common.sh@867 -- # local i 00:06:40.145 07:16:02 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.145 07:16:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.145 07:16:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:40.145 07:16:02 -- common/autotest_common.sh@871 -- # break 00:06:40.145 07:16:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.145 07:16:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.145 07:16:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.145 1+0 records in 00:06:40.145 1+0 records out 00:06:40.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320552 s, 12.8 MB/s 00:06:40.145 07:16:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.145 07:16:02 -- common/autotest_common.sh@884 -- # size=4096 00:06:40.145 07:16:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.145 07:16:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.145 07:16:02 -- common/autotest_common.sh@887 -- # return 0 00:06:40.145 07:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.145 07:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.145 07:16:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.713 /dev/nbd1 00:06:40.713 07:16:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.713 07:16:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.713 07:16:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:40.713 07:16:02 -- common/autotest_common.sh@867 -- # local i 00:06:40.713 07:16:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:40.713 07:16:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:40.713 07:16:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:40.713 07:16:02 -- common/autotest_common.sh@871 -- # break 00:06:40.713 07:16:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:40.713 07:16:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:40.713 07:16:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.713 1+0 records in 00:06:40.713 1+0 records out 00:06:40.713 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239357 s, 17.1 MB/s 00:06:40.713 07:16:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.713 07:16:02 -- common/autotest_common.sh@884 -- # size=4096 00:06:40.713 07:16:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:40.713 07:16:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:40.713 07:16:02 -- common/autotest_common.sh@887 -- # return 0 00:06:40.713 07:16:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.713 07:16:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.713 07:16:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.713 07:16:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.713 07:16:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.973 { 00:06:40.973 "nbd_device": "/dev/nbd0", 00:06:40.973 "bdev_name": "Malloc0" 
00:06:40.973 }, 00:06:40.973 { 00:06:40.973 "nbd_device": "/dev/nbd1", 00:06:40.973 "bdev_name": "Malloc1" 00:06:40.973 } 00:06:40.973 ]' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.973 { 00:06:40.973 "nbd_device": "/dev/nbd0", 00:06:40.973 "bdev_name": "Malloc0" 00:06:40.973 }, 00:06:40.973 { 00:06:40.973 "nbd_device": "/dev/nbd1", 00:06:40.973 "bdev_name": "Malloc1" 00:06:40.973 } 00:06:40.973 ]' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.973 /dev/nbd1' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.973 /dev/nbd1' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.973 256+0 records in 00:06:40.973 256+0 records out 00:06:40.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00553563 s, 189 MB/s 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.973 256+0 records in 00:06:40.973 256+0 records out 00:06:40.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254758 s, 41.2 MB/s 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.973 256+0 records in 00:06:40.973 256+0 records out 00:06:40.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269438 s, 38.9 MB/s 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@51 -- # local i 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.973 07:16:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@41 -- # break 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.233 07:16:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@41 -- # break 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.493 07:16:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.752 07:16:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.752 07:16:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.752 07:16:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@65 -- # true 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.011 07:16:04 -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.011 07:16:04 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.270 07:16:04 -- event/event.sh@35 -- # sleep 3 00:06:42.270 [2024-11-28 07:16:04.504584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.529 [2024-11-28 07:16:04.557368] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:42.529 [2024-11-28 07:16:04.557374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.529 [2024-11-28 07:16:04.614696] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.529 [2024-11-28 07:16:04.614764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.085 07:16:07 -- event/event.sh@38 -- # waitforlisten 67053 /var/tmp/spdk-nbd.sock 00:06:45.085 07:16:07 -- common/autotest_common.sh@829 -- # '[' -z 67053 ']' 00:06:45.085 07:16:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.085 07:16:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.085 07:16:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.085 07:16:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.085 07:16:07 -- common/autotest_common.sh@10 -- # set +x 00:06:45.344 07:16:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.344 07:16:07 -- common/autotest_common.sh@862 -- # return 0 00:06:45.344 07:16:07 -- event/event.sh@39 -- # killprocess 67053 00:06:45.344 07:16:07 -- common/autotest_common.sh@936 -- # '[' -z 67053 ']' 00:06:45.344 07:16:07 -- common/autotest_common.sh@940 -- # kill -0 67053 00:06:45.344 07:16:07 -- common/autotest_common.sh@941 -- # uname 00:06:45.344 07:16:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:45.344 07:16:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67053 00:06:45.602 07:16:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:45.602 killing process with pid 67053 00:06:45.602 07:16:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:45.602 07:16:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67053' 00:06:45.602 07:16:07 -- common/autotest_common.sh@955 -- # kill 67053 00:06:45.602 07:16:07 -- common/autotest_common.sh@960 -- # wait 67053 00:06:45.602 spdk_app_start is called in Round 0. 00:06:45.602 Shutdown signal received, stop current app iteration 00:06:45.602 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:45.602 spdk_app_start is called in Round 1. 00:06:45.602 Shutdown signal received, stop current app iteration 00:06:45.602 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:45.602 spdk_app_start is called in Round 2. 00:06:45.602 Shutdown signal received, stop current app iteration 00:06:45.602 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:06:45.602 spdk_app_start is called in Round 3. 
00:06:45.602 Shutdown signal received, stop current app iteration 00:06:45.602 07:16:07 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:45.602 07:16:07 -- event/event.sh@42 -- # return 0 00:06:45.602 00:06:45.602 real 0m19.532s 00:06:45.602 user 0m44.306s 00:06:45.602 sys 0m2.885s 00:06:45.602 07:16:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.602 07:16:07 -- common/autotest_common.sh@10 -- # set +x 00:06:45.602 ************************************ 00:06:45.602 END TEST app_repeat 00:06:45.602 ************************************ 00:06:45.602 07:16:07 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:45.602 07:16:07 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:45.602 07:16:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.602 07:16:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.602 07:16:07 -- common/autotest_common.sh@10 -- # set +x 00:06:45.602 ************************************ 00:06:45.602 START TEST cpu_locks 00:06:45.602 ************************************ 00:06:45.602 07:16:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:45.861 * Looking for test storage... 00:06:45.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:45.861 07:16:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:45.861 07:16:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:45.861 07:16:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:45.861 07:16:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:45.861 07:16:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:45.861 07:16:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:45.861 07:16:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:45.861 07:16:08 -- scripts/common.sh@335 -- # IFS=.-: 00:06:45.861 07:16:08 -- scripts/common.sh@335 -- # read -ra ver1 00:06:45.861 07:16:08 -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.861 07:16:08 -- scripts/common.sh@336 -- # read -ra ver2 00:06:45.861 07:16:08 -- scripts/common.sh@337 -- # local 'op=<' 00:06:45.861 07:16:08 -- scripts/common.sh@339 -- # ver1_l=2 00:06:45.861 07:16:08 -- scripts/common.sh@340 -- # ver2_l=1 00:06:45.861 07:16:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:45.861 07:16:08 -- scripts/common.sh@343 -- # case "$op" in 00:06:45.861 07:16:08 -- scripts/common.sh@344 -- # : 1 00:06:45.861 07:16:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:45.861 07:16:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.861 07:16:08 -- scripts/common.sh@364 -- # decimal 1 00:06:45.861 07:16:08 -- scripts/common.sh@352 -- # local d=1 00:06:45.861 07:16:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.861 07:16:08 -- scripts/common.sh@354 -- # echo 1 00:06:45.861 07:16:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:45.861 07:16:08 -- scripts/common.sh@365 -- # decimal 2 00:06:45.861 07:16:08 -- scripts/common.sh@352 -- # local d=2 00:06:45.861 07:16:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.861 07:16:08 -- scripts/common.sh@354 -- # echo 2 00:06:45.861 07:16:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:45.861 07:16:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:45.861 07:16:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:45.861 07:16:08 -- scripts/common.sh@367 -- # return 0 00:06:45.861 07:16:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.861 07:16:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:45.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.861 --rc genhtml_branch_coverage=1 00:06:45.861 --rc genhtml_function_coverage=1 00:06:45.861 --rc genhtml_legend=1 00:06:45.861 --rc geninfo_all_blocks=1 00:06:45.861 --rc geninfo_unexecuted_blocks=1 00:06:45.861 00:06:45.861 ' 00:06:45.861 07:16:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:45.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.861 --rc genhtml_branch_coverage=1 00:06:45.861 --rc genhtml_function_coverage=1 00:06:45.861 --rc genhtml_legend=1 00:06:45.861 --rc geninfo_all_blocks=1 00:06:45.861 --rc geninfo_unexecuted_blocks=1 00:06:45.861 00:06:45.861 ' 00:06:45.861 07:16:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:45.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.861 --rc genhtml_branch_coverage=1 00:06:45.861 --rc genhtml_function_coverage=1 00:06:45.861 --rc genhtml_legend=1 00:06:45.861 --rc geninfo_all_blocks=1 00:06:45.861 --rc geninfo_unexecuted_blocks=1 00:06:45.861 00:06:45.861 ' 00:06:45.861 07:16:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:45.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.861 --rc genhtml_branch_coverage=1 00:06:45.861 --rc genhtml_function_coverage=1 00:06:45.861 --rc genhtml_legend=1 00:06:45.861 --rc geninfo_all_blocks=1 00:06:45.861 --rc geninfo_unexecuted_blocks=1 00:06:45.861 00:06:45.861 ' 00:06:45.861 07:16:08 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:45.861 07:16:08 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:45.861 07:16:08 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:45.861 07:16:08 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:45.861 07:16:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:45.861 07:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.861 07:16:08 -- common/autotest_common.sh@10 -- # set +x 00:06:45.861 ************************************ 00:06:45.861 START TEST default_locks 00:06:45.861 ************************************ 00:06:45.861 07:16:08 -- common/autotest_common.sh@1114 -- # default_locks 00:06:45.861 07:16:08 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67498 00:06:45.861 07:16:08 -- event/cpu_locks.sh@47 -- # waitforlisten 67498 00:06:45.861 07:16:08 -- common/autotest_common.sh@829 -- # '[' -z 67498 ']' 00:06:45.861 07:16:08 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.861 07:16:08 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.861 07:16:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.861 07:16:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.861 07:16:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.861 07:16:08 -- common/autotest_common.sh@10 -- # set +x 00:06:45.861 [2024-11-28 07:16:08.111661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:45.862 [2024-11-28 07:16:08.111762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67498 ] 00:06:46.120 [2024-11-28 07:16:08.254118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.120 [2024-11-28 07:16:08.356538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.120 [2024-11-28 07:16:08.356734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.054 07:16:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.054 07:16:09 -- common/autotest_common.sh@862 -- # return 0 00:06:47.054 07:16:09 -- event/cpu_locks.sh@49 -- # locks_exist 67498 00:06:47.054 07:16:09 -- event/cpu_locks.sh@22 -- # lslocks -p 67498 00:06:47.054 07:16:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.312 07:16:09 -- event/cpu_locks.sh@50 -- # killprocess 67498 00:06:47.312 07:16:09 -- common/autotest_common.sh@936 -- # '[' -z 67498 ']' 00:06:47.312 07:16:09 -- common/autotest_common.sh@940 -- # kill -0 67498 00:06:47.312 07:16:09 -- common/autotest_common.sh@941 -- # uname 00:06:47.570 07:16:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:47.570 07:16:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67498 00:06:47.570 07:16:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:47.570 07:16:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:47.570 killing process with pid 67498 00:06:47.570 07:16:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67498' 00:06:47.570 07:16:09 -- common/autotest_common.sh@955 -- # kill 67498 00:06:47.570 07:16:09 -- common/autotest_common.sh@960 -- # wait 67498 00:06:47.827 07:16:09 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67498 00:06:47.827 07:16:09 -- common/autotest_common.sh@650 -- # local es=0 00:06:47.827 07:16:09 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67498 00:06:47.827 07:16:09 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:47.827 07:16:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.827 07:16:09 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:47.827 07:16:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.827 07:16:09 -- common/autotest_common.sh@653 -- # waitforlisten 67498 00:06:47.827 07:16:09 -- common/autotest_common.sh@829 -- # '[' -z 67498 ']' 00:06:47.827 07:16:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.827 07:16:09 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.827 07:16:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.827 07:16:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.827 07:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:47.827 ERROR: process (pid: 67498) is no longer running 00:06:47.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67498) - No such process 00:06:47.827 07:16:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.827 07:16:10 -- common/autotest_common.sh@862 -- # return 1 00:06:47.827 07:16:10 -- common/autotest_common.sh@653 -- # es=1 00:06:47.827 07:16:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.827 07:16:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:47.827 07:16:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.827 07:16:10 -- event/cpu_locks.sh@54 -- # no_locks 00:06:47.827 07:16:10 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:47.827 07:16:10 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:47.827 07:16:10 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:47.827 00:06:47.827 real 0m1.954s 00:06:47.827 user 0m2.138s 00:06:47.827 sys 0m0.586s 00:06:47.827 07:16:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.827 07:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:47.827 ************************************ 00:06:47.827 END TEST default_locks 00:06:47.827 ************************************ 00:06:47.827 07:16:10 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:47.827 07:16:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.827 07:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.827 07:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:47.827 ************************************ 00:06:47.827 START TEST default_locks_via_rpc 00:06:47.827 ************************************ 00:06:47.827 07:16:10 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:47.827 07:16:10 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67550 00:06:47.827 07:16:10 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.827 07:16:10 -- event/cpu_locks.sh@63 -- # waitforlisten 67550 00:06:47.827 07:16:10 -- common/autotest_common.sh@829 -- # '[' -z 67550 ']' 00:06:47.827 07:16:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.827 07:16:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.827 07:16:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.827 07:16:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.827 07:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:48.083 [2024-11-28 07:16:10.115603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:48.083 [2024-11-28 07:16:10.115700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67550 ] 00:06:48.083 [2024-11-28 07:16:10.256009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.083 [2024-11-28 07:16:10.351420] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:48.083 [2024-11-28 07:16:10.351663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.015 07:16:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.015 07:16:11 -- common/autotest_common.sh@862 -- # return 0 00:06:49.015 07:16:11 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:49.015 07:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.015 07:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.015 07:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.015 07:16:11 -- event/cpu_locks.sh@67 -- # no_locks 00:06:49.015 07:16:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.015 07:16:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.015 07:16:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.015 07:16:11 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:49.015 07:16:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.015 07:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.015 07:16:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.015 07:16:11 -- event/cpu_locks.sh@71 -- # locks_exist 67550 00:06:49.015 07:16:11 -- event/cpu_locks.sh@22 -- # lslocks -p 67550 00:06:49.015 07:16:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.582 07:16:11 -- event/cpu_locks.sh@73 -- # killprocess 67550 00:06:49.582 07:16:11 -- common/autotest_common.sh@936 -- # '[' -z 67550 ']' 00:06:49.582 07:16:11 -- common/autotest_common.sh@940 -- # kill -0 67550 00:06:49.582 07:16:11 -- common/autotest_common.sh@941 -- # uname 00:06:49.582 07:16:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:49.582 07:16:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67550 00:06:49.582 07:16:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:49.582 killing process with pid 67550 00:06:49.582 07:16:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:49.582 07:16:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67550' 00:06:49.582 07:16:11 -- common/autotest_common.sh@955 -- # kill 67550 00:06:49.582 07:16:11 -- common/autotest_common.sh@960 -- # wait 67550 00:06:49.840 00:06:49.840 real 0m1.940s 00:06:49.840 user 0m2.097s 00:06:49.840 sys 0m0.598s 00:06:49.840 07:16:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:49.840 07:16:11 -- common/autotest_common.sh@10 -- # set +x 00:06:49.840 ************************************ 00:06:49.840 END TEST default_locks_via_rpc 00:06:49.840 ************************************ 00:06:49.840 07:16:12 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.840 07:16:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.840 07:16:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.840 07:16:12 -- common/autotest_common.sh@10 -- # set +x 00:06:49.840 
************************************ 00:06:49.840 START TEST non_locking_app_on_locked_coremask 00:06:49.840 ************************************ 00:06:49.840 07:16:12 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:49.840 07:16:12 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67601 00:06:49.840 07:16:12 -- event/cpu_locks.sh@81 -- # waitforlisten 67601 /var/tmp/spdk.sock 00:06:49.840 07:16:12 -- common/autotest_common.sh@829 -- # '[' -z 67601 ']' 00:06:49.840 07:16:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.840 07:16:12 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.840 07:16:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.840 07:16:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.840 07:16:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.840 07:16:12 -- common/autotest_common.sh@10 -- # set +x 00:06:49.840 [2024-11-28 07:16:12.099800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:49.840 [2024-11-28 07:16:12.099940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67601 ] 00:06:50.098 [2024-11-28 07:16:12.241300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.098 [2024-11-28 07:16:12.325111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.098 [2024-11-28 07:16:12.325277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.033 07:16:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.033 07:16:13 -- common/autotest_common.sh@862 -- # return 0 00:06:51.033 07:16:13 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67617 00:06:51.033 07:16:13 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:51.033 07:16:13 -- event/cpu_locks.sh@85 -- # waitforlisten 67617 /var/tmp/spdk2.sock 00:06:51.033 07:16:13 -- common/autotest_common.sh@829 -- # '[' -z 67617 ']' 00:06:51.033 07:16:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.033 07:16:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.033 07:16:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.033 07:16:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.033 07:16:13 -- common/autotest_common.sh@10 -- # set +x 00:06:51.033 [2024-11-28 07:16:13.156218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.033 [2024-11-28 07:16:13.156398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67617 ] 00:06:51.033 [2024-11-28 07:16:13.304928] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.033 [2024-11-28 07:16:13.305017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.291 [2024-11-28 07:16:13.523025] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.291 [2024-11-28 07:16:13.523212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.226 07:16:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.226 07:16:14 -- common/autotest_common.sh@862 -- # return 0 00:06:52.226 07:16:14 -- event/cpu_locks.sh@87 -- # locks_exist 67601 00:06:52.226 07:16:14 -- event/cpu_locks.sh@22 -- # lslocks -p 67601 00:06:52.226 07:16:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.793 07:16:14 -- event/cpu_locks.sh@89 -- # killprocess 67601 00:06:52.793 07:16:14 -- common/autotest_common.sh@936 -- # '[' -z 67601 ']' 00:06:52.793 07:16:14 -- common/autotest_common.sh@940 -- # kill -0 67601 00:06:52.793 07:16:14 -- common/autotest_common.sh@941 -- # uname 00:06:52.793 07:16:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:52.793 07:16:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67601 00:06:52.793 07:16:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:52.793 killing process with pid 67601 00:06:52.793 07:16:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:52.793 07:16:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67601' 00:06:52.793 07:16:14 -- common/autotest_common.sh@955 -- # kill 67601 00:06:52.793 07:16:14 -- common/autotest_common.sh@960 -- # wait 67601 00:06:53.730 07:16:15 -- event/cpu_locks.sh@90 -- # killprocess 67617 00:06:53.730 07:16:15 -- common/autotest_common.sh@936 -- # '[' -z 67617 ']' 00:06:53.730 07:16:15 -- common/autotest_common.sh@940 -- # kill -0 67617 00:06:53.730 07:16:15 -- common/autotest_common.sh@941 -- # uname 00:06:53.730 07:16:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:53.730 07:16:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67617 00:06:53.730 07:16:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:53.730 killing process with pid 67617 00:06:53.730 07:16:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:53.730 07:16:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67617' 00:06:53.730 07:16:15 -- common/autotest_common.sh@955 -- # kill 67617 00:06:53.730 07:16:15 -- common/autotest_common.sh@960 -- # wait 67617 00:06:53.989 00:06:53.989 real 0m4.180s 00:06:53.989 user 0m4.620s 00:06:53.989 sys 0m1.172s 00:06:53.989 ************************************ 00:06:53.989 END TEST non_locking_app_on_locked_coremask 00:06:53.989 07:16:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:53.989 07:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:53.989 ************************************ 00:06:53.989 07:16:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:53.989 07:16:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:53.989 07:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.989 07:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:54.248 ************************************ 00:06:54.248 START TEST locking_app_on_unlocked_coremask 00:06:54.248 ************************************ 00:06:54.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
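In the non_locking_app_on_locked_coremask run above, the first target keeps the core 0 lock, so the second instance has to opt out of locking to start on the same core. A rough sketch of that pairing, leaving out the waitforlisten handling the test adds:

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                                                 # takes /var/tmp/spdk_cpu_lock_000
"$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # same core, no lock attempt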
00:06:54.248 07:16:16 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:54.248 07:16:16 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=67692 00:06:54.248 07:16:16 -- event/cpu_locks.sh@99 -- # waitforlisten 67692 /var/tmp/spdk.sock 00:06:54.248 07:16:16 -- common/autotest_common.sh@829 -- # '[' -z 67692 ']' 00:06:54.248 07:16:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.248 07:16:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.248 07:16:16 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:54.248 07:16:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.248 07:16:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.248 07:16:16 -- common/autotest_common.sh@10 -- # set +x 00:06:54.248 [2024-11-28 07:16:16.335927] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.248 [2024-11-28 07:16:16.336054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67692 ] 00:06:54.248 [2024-11-28 07:16:16.472017] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:54.248 [2024-11-28 07:16:16.472404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.506 [2024-11-28 07:16:16.565452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:54.506 [2024-11-28 07:16:16.565635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.441 07:16:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.441 07:16:17 -- common/autotest_common.sh@862 -- # return 0 00:06:55.441 07:16:17 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67708 00:06:55.441 07:16:17 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:55.441 07:16:17 -- event/cpu_locks.sh@103 -- # waitforlisten 67708 /var/tmp/spdk2.sock 00:06:55.441 07:16:17 -- common/autotest_common.sh@829 -- # '[' -z 67708 ']' 00:06:55.441 07:16:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.441 07:16:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.441 07:16:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.441 07:16:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.441 07:16:17 -- common/autotest_common.sh@10 -- # set +x 00:06:55.441 [2024-11-28 07:16:17.414295] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:55.441 [2024-11-28 07:16:17.414434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67708 ] 00:06:55.441 [2024-11-28 07:16:17.561009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.700 [2024-11-28 07:16:17.752375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:55.700 [2024-11-28 07:16:17.752591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.267 07:16:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.267 07:16:18 -- common/autotest_common.sh@862 -- # return 0 00:06:56.267 07:16:18 -- event/cpu_locks.sh@105 -- # locks_exist 67708 00:06:56.267 07:16:18 -- event/cpu_locks.sh@22 -- # lslocks -p 67708 00:06:56.267 07:16:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.202 07:16:19 -- event/cpu_locks.sh@107 -- # killprocess 67692 00:06:57.202 07:16:19 -- common/autotest_common.sh@936 -- # '[' -z 67692 ']' 00:06:57.202 07:16:19 -- common/autotest_common.sh@940 -- # kill -0 67692 00:06:57.202 07:16:19 -- common/autotest_common.sh@941 -- # uname 00:06:57.202 07:16:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.202 07:16:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67692 00:06:57.202 07:16:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.202 killing process with pid 67692 00:06:57.202 07:16:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.202 07:16:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67692' 00:06:57.202 07:16:19 -- common/autotest_common.sh@955 -- # kill 67692 00:06:57.202 07:16:19 -- common/autotest_common.sh@960 -- # wait 67692 00:06:58.135 07:16:20 -- event/cpu_locks.sh@108 -- # killprocess 67708 00:06:58.135 07:16:20 -- common/autotest_common.sh@936 -- # '[' -z 67708 ']' 00:06:58.135 07:16:20 -- common/autotest_common.sh@940 -- # kill -0 67708 00:06:58.135 07:16:20 -- common/autotest_common.sh@941 -- # uname 00:06:58.135 07:16:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.135 07:16:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67708 00:06:58.135 07:16:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.135 killing process with pid 67708 00:06:58.135 07:16:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.135 07:16:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67708' 00:06:58.135 07:16:20 -- common/autotest_common.sh@955 -- # kill 67708 00:06:58.135 07:16:20 -- common/autotest_common.sh@960 -- # wait 67708 00:06:58.393 00:06:58.393 real 0m4.346s 00:06:58.393 user 0m4.896s 00:06:58.393 sys 0m1.137s 00:06:58.393 07:16:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.393 07:16:20 -- common/autotest_common.sh@10 -- # set +x 00:06:58.393 ************************************ 00:06:58.393 END TEST locking_app_on_unlocked_coremask 00:06:58.393 ************************************ 00:06:58.393 07:16:20 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.393 07:16:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:58.393 07:16:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.393 07:16:20 -- common/autotest_common.sh@10 -- # set +x 
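locking_app_on_unlocked_coremask, just finished above, flips that order: the first target is started with locking disabled, so the second instance on the same core can claim the lock itself. A sketch under the same simplifications:

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 --disable-cpumask-locks &   # leaves /var/tmp/spdk_cpu_lock_000 free
"$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &    # claims the core 0 lock normally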
00:06:58.650 ************************************ 00:06:58.650 START TEST locking_app_on_locked_coremask 00:06:58.650 ************************************ 00:06:58.650 07:16:20 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:06:58.650 07:16:20 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67776 00:06:58.650 07:16:20 -- event/cpu_locks.sh@116 -- # waitforlisten 67776 /var/tmp/spdk.sock 00:06:58.650 07:16:20 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.650 07:16:20 -- common/autotest_common.sh@829 -- # '[' -z 67776 ']' 00:06:58.650 07:16:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.650 07:16:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.650 07:16:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.650 07:16:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.650 07:16:20 -- common/autotest_common.sh@10 -- # set +x 00:06:58.650 [2024-11-28 07:16:20.737060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:58.650 [2024-11-28 07:16:20.737175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67776 ] 00:06:58.650 [2024-11-28 07:16:20.879794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.908 [2024-11-28 07:16:20.984955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.908 [2024-11-28 07:16:20.985185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.860 07:16:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.860 07:16:21 -- common/autotest_common.sh@862 -- # return 0 00:06:59.860 07:16:21 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67792 00:06:59.860 07:16:21 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.860 07:16:21 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67792 /var/tmp/spdk2.sock 00:06:59.860 07:16:21 -- common/autotest_common.sh@650 -- # local es=0 00:06:59.861 07:16:21 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67792 /var/tmp/spdk2.sock 00:06:59.861 07:16:21 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:59.861 07:16:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.861 07:16:21 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:59.861 07:16:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:59.861 07:16:21 -- common/autotest_common.sh@653 -- # waitforlisten 67792 /var/tmp/spdk2.sock 00:06:59.861 07:16:21 -- common/autotest_common.sh@829 -- # '[' -z 67792 ']' 00:06:59.861 07:16:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.861 07:16:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.861 07:16:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:59.861 07:16:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.861 07:16:21 -- common/autotest_common.sh@10 -- # set +x 00:06:59.861 [2024-11-28 07:16:21.840885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.861 [2024-11-28 07:16:21.841023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67792 ] 00:06:59.861 [2024-11-28 07:16:21.985088] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67776 has claimed it. 00:06:59.861 [2024-11-28 07:16:21.985196] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.427 ERROR: process (pid: 67792) is no longer running 00:07:00.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67792) - No such process 00:07:00.427 07:16:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.427 07:16:22 -- common/autotest_common.sh@862 -- # return 1 00:07:00.427 07:16:22 -- common/autotest_common.sh@653 -- # es=1 00:07:00.427 07:16:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:00.427 07:16:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:00.427 07:16:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:00.427 07:16:22 -- event/cpu_locks.sh@122 -- # locks_exist 67776 00:07:00.427 07:16:22 -- event/cpu_locks.sh@22 -- # lslocks -p 67776 00:07:00.427 07:16:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.685 07:16:22 -- event/cpu_locks.sh@124 -- # killprocess 67776 00:07:00.685 07:16:22 -- common/autotest_common.sh@936 -- # '[' -z 67776 ']' 00:07:00.685 07:16:22 -- common/autotest_common.sh@940 -- # kill -0 67776 00:07:00.685 07:16:22 -- common/autotest_common.sh@941 -- # uname 00:07:00.685 07:16:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:00.685 07:16:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67776 00:07:00.685 07:16:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:00.685 07:16:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:00.685 killing process with pid 67776 00:07:00.685 07:16:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67776' 00:07:00.685 07:16:22 -- common/autotest_common.sh@955 -- # kill 67776 00:07:00.685 07:16:22 -- common/autotest_common.sh@960 -- # wait 67776 00:07:01.252 00:07:01.252 real 0m2.633s 00:07:01.252 user 0m3.060s 00:07:01.252 sys 0m0.643s 00:07:01.252 07:16:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.252 07:16:23 -- common/autotest_common.sh@10 -- # set +x 00:07:01.252 ************************************ 00:07:01.252 END TEST locking_app_on_locked_coremask 00:07:01.252 ************************************ 00:07:01.252 07:16:23 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:01.252 07:16:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.252 07:16:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.252 07:16:23 -- common/autotest_common.sh@10 -- # set +x 00:07:01.252 ************************************ 00:07:01.252 START TEST locking_overlapped_coremask 00:07:01.252 ************************************ 00:07:01.252 07:16:23 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:07:01.252 07:16:23 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67834 00:07:01.252 07:16:23 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.252 07:16:23 -- event/cpu_locks.sh@133 -- # waitforlisten 67834 /var/tmp/spdk.sock 00:07:01.252 07:16:23 -- common/autotest_common.sh@829 -- # '[' -z 67834 ']' 00:07:01.252 07:16:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.252 07:16:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.252 07:16:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.252 07:16:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.252 07:16:23 -- common/autotest_common.sh@10 -- # set +x 00:07:01.252 [2024-11-28 07:16:23.429559] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.252 [2024-11-28 07:16:23.429693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67834 ] 00:07:01.512 [2024-11-28 07:16:23.570902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.512 [2024-11-28 07:16:23.672535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:01.512 [2024-11-28 07:16:23.672876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.512 [2024-11-28 07:16:23.673015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.512 [2024-11-28 07:16:23.673020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.445 07:16:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.445 07:16:24 -- common/autotest_common.sh@862 -- # return 0 00:07:02.445 07:16:24 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67856 00:07:02.445 07:16:24 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67856 /var/tmp/spdk2.sock 00:07:02.445 07:16:24 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:02.445 07:16:24 -- common/autotest_common.sh@650 -- # local es=0 00:07:02.445 07:16:24 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67856 /var/tmp/spdk2.sock 00:07:02.445 07:16:24 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:02.445 07:16:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.445 07:16:24 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:02.445 07:16:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.445 07:16:24 -- common/autotest_common.sh@653 -- # waitforlisten 67856 /var/tmp/spdk2.sock 00:07:02.445 07:16:24 -- common/autotest_common.sh@829 -- # '[' -z 67856 ']' 00:07:02.445 07:16:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.445 07:16:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:02.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.445 07:16:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:02.445 07:16:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:02.445 07:16:24 -- common/autotest_common.sh@10 -- # set +x 00:07:02.445 [2024-11-28 07:16:24.512627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.445 [2024-11-28 07:16:24.512788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67856 ] 00:07:02.445 [2024-11-28 07:16:24.678261] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67834 has claimed it. 00:07:02.445 [2024-11-28 07:16:24.678399] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.011 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67856) - No such process 00:07:03.011 ERROR: process (pid: 67856) is no longer running 00:07:03.011 07:16:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.011 07:16:25 -- common/autotest_common.sh@862 -- # return 1 00:07:03.011 07:16:25 -- common/autotest_common.sh@653 -- # es=1 00:07:03.011 07:16:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.011 07:16:25 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.011 07:16:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.011 07:16:25 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:03.011 07:16:25 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.011 07:16:25 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.011 07:16:25 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.011 07:16:25 -- event/cpu_locks.sh@141 -- # killprocess 67834 00:07:03.011 07:16:25 -- common/autotest_common.sh@936 -- # '[' -z 67834 ']' 00:07:03.011 07:16:25 -- common/autotest_common.sh@940 -- # kill -0 67834 00:07:03.011 07:16:25 -- common/autotest_common.sh@941 -- # uname 00:07:03.011 07:16:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.011 07:16:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67834 00:07:03.011 killing process with pid 67834 00:07:03.011 07:16:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.011 07:16:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.011 07:16:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67834' 00:07:03.011 07:16:25 -- common/autotest_common.sh@955 -- # kill 67834 00:07:03.011 07:16:25 -- common/autotest_common.sh@960 -- # wait 67834 00:07:03.577 ************************************ 00:07:03.577 END TEST locking_overlapped_coremask 00:07:03.577 ************************************ 00:07:03.577 00:07:03.577 real 0m2.287s 00:07:03.577 user 0m6.368s 00:07:03.577 sys 0m0.473s 00:07:03.577 07:16:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.577 07:16:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.577 07:16:25 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:03.577 07:16:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:03.577 07:16:25 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.577 07:16:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.577 ************************************ 00:07:03.577 START TEST locking_overlapped_coremask_via_rpc 00:07:03.577 ************************************ 00:07:03.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.577 07:16:25 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:03.577 07:16:25 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67901 00:07:03.577 07:16:25 -- event/cpu_locks.sh@149 -- # waitforlisten 67901 /var/tmp/spdk.sock 00:07:03.577 07:16:25 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:03.577 07:16:25 -- common/autotest_common.sh@829 -- # '[' -z 67901 ']' 00:07:03.577 07:16:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.578 07:16:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.578 07:16:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.578 07:16:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.578 07:16:25 -- common/autotest_common.sh@10 -- # set +x 00:07:03.578 [2024-11-28 07:16:25.764047] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.578 [2024-11-28 07:16:25.764163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67901 ] 00:07:03.836 [2024-11-28 07:16:25.900547] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:03.836 [2024-11-28 07:16:25.900638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.836 [2024-11-28 07:16:26.004725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.836 [2024-11-28 07:16:26.005299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.836 [2024-11-28 07:16:26.005452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.836 [2024-11-28 07:16:26.005457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.770 07:16:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.770 07:16:26 -- common/autotest_common.sh@862 -- # return 0 00:07:04.770 07:16:26 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:04.770 07:16:26 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67919 00:07:04.770 07:16:26 -- event/cpu_locks.sh@153 -- # waitforlisten 67919 /var/tmp/spdk2.sock 00:07:04.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.771 07:16:26 -- common/autotest_common.sh@829 -- # '[' -z 67919 ']' 00:07:04.771 07:16:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.771 07:16:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.771 07:16:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:04.771 07:16:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.771 07:16:26 -- common/autotest_common.sh@10 -- # set +x 00:07:04.771 [2024-11-28 07:16:26.912144] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:04.771 [2024-11-28 07:16:26.912655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67919 ] 00:07:05.029 [2024-11-28 07:16:27.063929] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:05.029 [2024-11-28 07:16:27.064013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.029 [2024-11-28 07:16:27.292278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.029 [2024-11-28 07:16:27.296822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.029 [2024-11-28 07:16:27.296985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.029 [2024-11-28 07:16:27.296985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:05.964 07:16:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.964 07:16:28 -- common/autotest_common.sh@862 -- # return 0 00:07:05.964 07:16:28 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.964 07:16:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.964 07:16:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.964 07:16:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.964 07:16:28 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.964 07:16:28 -- common/autotest_common.sh@650 -- # local es=0 00:07:05.964 07:16:28 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.964 07:16:28 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:05.964 07:16:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.964 07:16:28 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:05.964 07:16:28 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.964 07:16:28 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.964 07:16:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.964 07:16:28 -- common/autotest_common.sh@10 -- # set +x 00:07:05.964 [2024-11-28 07:16:28.045632] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67901 has claimed it. 
00:07:05.964 request: 00:07:05.964 { 00:07:05.964 "method": "framework_enable_cpumask_locks", 00:07:05.964 "req_id": 1 00:07:05.964 } 00:07:05.964 Got JSON-RPC error response 00:07:05.964 response: 00:07:05.964 { 00:07:05.964 "code": -32603, 00:07:05.964 "message": "Failed to claim CPU core: 2" 00:07:05.964 } 00:07:05.964 07:16:28 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:05.964 07:16:28 -- common/autotest_common.sh@653 -- # es=1 00:07:05.964 07:16:28 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.964 07:16:28 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.964 07:16:28 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.964 07:16:28 -- event/cpu_locks.sh@158 -- # waitforlisten 67901 /var/tmp/spdk.sock 00:07:05.964 07:16:28 -- common/autotest_common.sh@829 -- # '[' -z 67901 ']' 00:07:05.964 07:16:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.964 07:16:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.964 07:16:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.964 07:16:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.964 07:16:28 -- common/autotest_common.sh@10 -- # set +x 00:07:06.222 07:16:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.222 07:16:28 -- common/autotest_common.sh@862 -- # return 0 00:07:06.222 07:16:28 -- event/cpu_locks.sh@159 -- # waitforlisten 67919 /var/tmp/spdk2.sock 00:07:06.222 07:16:28 -- common/autotest_common.sh@829 -- # '[' -z 67919 ']' 00:07:06.222 07:16:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.222 07:16:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.222 07:16:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
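The JSON-RPC error above is the expected outcome: the first target (-m 0x7) re-enabled its locks and claimed cores 0-2, so the second target (-m 0x1c), which overlaps on core 2, cannot claim them. A sketch of reproducing that failure with rpc.py from the same checkout:

# the second target listens on /var/tmp/spdk2.sock and was started with
# --disable-cpumask-locks; core 2 is already locked by the first target
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
    framework_enable_cpumask_locks
# fails with the JSON-RPC error shown above:
#   "code": -32603, "message": "Failed to claim CPU core: 2"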
00:07:06.222 07:16:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.222 07:16:28 -- common/autotest_common.sh@10 -- # set +x 00:07:06.481 ************************************ 00:07:06.481 END TEST locking_overlapped_coremask_via_rpc 00:07:06.481 ************************************ 00:07:06.481 07:16:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.481 07:16:28 -- common/autotest_common.sh@862 -- # return 0 00:07:06.481 07:16:28 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:06.481 07:16:28 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.481 07:16:28 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.481 07:16:28 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.481 00:07:06.481 real 0m3.040s 00:07:06.481 user 0m1.724s 00:07:06.481 sys 0m0.232s 00:07:06.481 07:16:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.481 07:16:28 -- common/autotest_common.sh@10 -- # set +x 00:07:06.739 07:16:28 -- event/cpu_locks.sh@174 -- # cleanup 00:07:06.739 07:16:28 -- event/cpu_locks.sh@15 -- # [[ -z 67901 ]] 00:07:06.739 07:16:28 -- event/cpu_locks.sh@15 -- # killprocess 67901 00:07:06.739 07:16:28 -- common/autotest_common.sh@936 -- # '[' -z 67901 ']' 00:07:06.739 07:16:28 -- common/autotest_common.sh@940 -- # kill -0 67901 00:07:06.739 07:16:28 -- common/autotest_common.sh@941 -- # uname 00:07:06.739 07:16:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.739 07:16:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67901 00:07:06.739 07:16:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.739 07:16:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.739 07:16:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67901' 00:07:06.739 killing process with pid 67901 00:07:06.739 07:16:28 -- common/autotest_common.sh@955 -- # kill 67901 00:07:06.739 07:16:28 -- common/autotest_common.sh@960 -- # wait 67901 00:07:07.305 07:16:29 -- event/cpu_locks.sh@16 -- # [[ -z 67919 ]] 00:07:07.305 07:16:29 -- event/cpu_locks.sh@16 -- # killprocess 67919 00:07:07.305 07:16:29 -- common/autotest_common.sh@936 -- # '[' -z 67919 ']' 00:07:07.305 07:16:29 -- common/autotest_common.sh@940 -- # kill -0 67919 00:07:07.305 07:16:29 -- common/autotest_common.sh@941 -- # uname 00:07:07.305 07:16:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.305 07:16:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67919 00:07:07.305 killing process with pid 67919 00:07:07.305 07:16:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:07.305 07:16:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:07.305 07:16:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67919' 00:07:07.305 07:16:29 -- common/autotest_common.sh@955 -- # kill 67919 00:07:07.305 07:16:29 -- common/autotest_common.sh@960 -- # wait 67919 00:07:07.563 07:16:29 -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.563 07:16:29 -- event/cpu_locks.sh@1 -- # cleanup 00:07:07.563 07:16:29 -- event/cpu_locks.sh@15 -- # [[ -z 67901 ]] 00:07:07.563 07:16:29 -- event/cpu_locks.sh@15 -- # killprocess 67901 00:07:07.563 07:16:29 -- 
common/autotest_common.sh@936 -- # '[' -z 67901 ']' 00:07:07.563 Process with pid 67901 is not found 00:07:07.563 07:16:29 -- common/autotest_common.sh@940 -- # kill -0 67901 00:07:07.563 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67901) - No such process 00:07:07.563 07:16:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67901 is not found' 00:07:07.563 07:16:29 -- event/cpu_locks.sh@16 -- # [[ -z 67919 ]] 00:07:07.563 Process with pid 67919 is not found 00:07:07.563 07:16:29 -- event/cpu_locks.sh@16 -- # killprocess 67919 00:07:07.563 07:16:29 -- common/autotest_common.sh@936 -- # '[' -z 67919 ']' 00:07:07.563 07:16:29 -- common/autotest_common.sh@940 -- # kill -0 67919 00:07:07.563 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67919) - No such process 00:07:07.563 07:16:29 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67919 is not found' 00:07:07.563 07:16:29 -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.563 ************************************ 00:07:07.563 END TEST cpu_locks 00:07:07.563 ************************************ 00:07:07.563 00:07:07.563 real 0m21.957s 00:07:07.563 user 0m39.773s 00:07:07.563 sys 0m5.793s 00:07:07.563 07:16:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.563 07:16:29 -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 ************************************ 00:07:07.821 END TEST event 00:07:07.821 ************************************ 00:07:07.821 00:07:07.821 real 0m50.603s 00:07:07.821 user 1m39.594s 00:07:07.821 sys 0m9.574s 00:07:07.821 07:16:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.821 07:16:29 -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 07:16:29 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:07.821 07:16:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.821 07:16:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.821 07:16:29 -- common/autotest_common.sh@10 -- # set +x 00:07:07.821 ************************************ 00:07:07.821 START TEST thread 00:07:07.821 ************************************ 00:07:07.821 07:16:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:07.821 * Looking for test storage... 
00:07:07.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:07.821 07:16:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:07.821 07:16:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:07.821 07:16:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:07.821 07:16:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:07.821 07:16:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:07.821 07:16:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:07.821 07:16:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:07.821 07:16:30 -- scripts/common.sh@335 -- # IFS=.-: 00:07:07.821 07:16:30 -- scripts/common.sh@335 -- # read -ra ver1 00:07:07.821 07:16:30 -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.821 07:16:30 -- scripts/common.sh@336 -- # read -ra ver2 00:07:07.821 07:16:30 -- scripts/common.sh@337 -- # local 'op=<' 00:07:07.821 07:16:30 -- scripts/common.sh@339 -- # ver1_l=2 00:07:07.821 07:16:30 -- scripts/common.sh@340 -- # ver2_l=1 00:07:07.821 07:16:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:07.821 07:16:30 -- scripts/common.sh@343 -- # case "$op" in 00:07:07.821 07:16:30 -- scripts/common.sh@344 -- # : 1 00:07:07.821 07:16:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:07.821 07:16:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.821 07:16:30 -- scripts/common.sh@364 -- # decimal 1 00:07:07.821 07:16:30 -- scripts/common.sh@352 -- # local d=1 00:07:07.821 07:16:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.821 07:16:30 -- scripts/common.sh@354 -- # echo 1 00:07:08.079 07:16:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:08.079 07:16:30 -- scripts/common.sh@365 -- # decimal 2 00:07:08.079 07:16:30 -- scripts/common.sh@352 -- # local d=2 00:07:08.079 07:16:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.079 07:16:30 -- scripts/common.sh@354 -- # echo 2 00:07:08.079 07:16:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:08.079 07:16:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:08.079 07:16:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:08.079 07:16:30 -- scripts/common.sh@367 -- # return 0 00:07:08.079 07:16:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.079 07:16:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:08.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.079 --rc genhtml_branch_coverage=1 00:07:08.079 --rc genhtml_function_coverage=1 00:07:08.079 --rc genhtml_legend=1 00:07:08.079 --rc geninfo_all_blocks=1 00:07:08.079 --rc geninfo_unexecuted_blocks=1 00:07:08.079 00:07:08.079 ' 00:07:08.079 07:16:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:08.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.079 --rc genhtml_branch_coverage=1 00:07:08.079 --rc genhtml_function_coverage=1 00:07:08.079 --rc genhtml_legend=1 00:07:08.079 --rc geninfo_all_blocks=1 00:07:08.079 --rc geninfo_unexecuted_blocks=1 00:07:08.079 00:07:08.079 ' 00:07:08.079 07:16:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:08.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.079 --rc genhtml_branch_coverage=1 00:07:08.079 --rc genhtml_function_coverage=1 00:07:08.079 --rc genhtml_legend=1 00:07:08.079 --rc geninfo_all_blocks=1 00:07:08.079 --rc geninfo_unexecuted_blocks=1 00:07:08.080 00:07:08.080 ' 00:07:08.080 07:16:30 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:08.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.080 --rc genhtml_branch_coverage=1 00:07:08.080 --rc genhtml_function_coverage=1 00:07:08.080 --rc genhtml_legend=1 00:07:08.080 --rc geninfo_all_blocks=1 00:07:08.080 --rc geninfo_unexecuted_blocks=1 00:07:08.080 00:07:08.080 ' 00:07:08.080 07:16:30 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:08.080 07:16:30 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:08.080 07:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.080 07:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:08.080 ************************************ 00:07:08.080 START TEST thread_poller_perf 00:07:08.080 ************************************ 00:07:08.080 07:16:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:08.080 [2024-11-28 07:16:30.137670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:08.080 [2024-11-28 07:16:30.137868] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68056 ] 00:07:08.080 [2024-11-28 07:16:30.275256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.337 [2024-11-28 07:16:30.393869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.337 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:09.290 [2024-11-28T07:16:31.565Z] ====================================== 00:07:09.290 [2024-11-28T07:16:31.565Z] busy:2218515695 (cyc) 00:07:09.290 [2024-11-28T07:16:31.565Z] total_run_count: 290000 00:07:09.290 [2024-11-28T07:16:31.565Z] tsc_hz: 2200000000 (cyc) 00:07:09.290 [2024-11-28T07:16:31.565Z] ====================================== 00:07:09.290 [2024-11-28T07:16:31.565Z] poller_cost: 7650 (cyc), 3477 (nsec) 00:07:09.290 00:07:09.290 real 0m1.377s 00:07:09.290 user 0m1.189s 00:07:09.290 sys 0m0.073s 00:07:09.290 ************************************ 00:07:09.290 END TEST thread_poller_perf 00:07:09.290 ************************************ 00:07:09.290 07:16:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.290 07:16:31 -- common/autotest_common.sh@10 -- # set +x 00:07:09.290 07:16:31 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.290 07:16:31 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:09.290 07:16:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.290 07:16:31 -- common/autotest_common.sh@10 -- # set +x 00:07:09.290 ************************************ 00:07:09.290 START TEST thread_poller_perf 00:07:09.290 ************************************ 00:07:09.290 07:16:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.551 [2024-11-28 07:16:31.564418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:09.551 [2024-11-28 07:16:31.564544] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68092 ] 00:07:09.551 [2024-11-28 07:16:31.707048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.551 [2024-11-28 07:16:31.823095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.551 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:10.924 [2024-11-28T07:16:33.199Z] ====================================== 00:07:10.924 [2024-11-28T07:16:33.199Z] busy:2203391614 (cyc) 00:07:10.924 [2024-11-28T07:16:33.199Z] total_run_count: 4101000 00:07:10.924 [2024-11-28T07:16:33.199Z] tsc_hz: 2200000000 (cyc) 00:07:10.924 [2024-11-28T07:16:33.199Z] ====================================== 00:07:10.924 [2024-11-28T07:16:33.200Z] poller_cost: 537 (cyc), 244 (nsec) 00:07:10.925 00:07:10.925 real 0m1.363s 00:07:10.925 user 0m1.187s 00:07:10.925 sys 0m0.065s 00:07:10.925 07:16:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.925 ************************************ 00:07:10.925 END TEST thread_poller_perf 00:07:10.925 ************************************ 00:07:10.925 07:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:10.925 07:16:32 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:10.925 00:07:10.925 real 0m3.031s 00:07:10.925 user 0m2.508s 00:07:10.925 sys 0m0.295s 00:07:10.925 07:16:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.925 ************************************ 00:07:10.925 END TEST thread 00:07:10.925 07:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:10.925 ************************************ 00:07:10.925 07:16:32 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:10.925 07:16:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.925 07:16:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.925 07:16:32 -- common/autotest_common.sh@10 -- # set +x 00:07:10.925 ************************************ 00:07:10.925 START TEST accel 00:07:10.925 ************************************ 00:07:10.925 07:16:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:10.925 * Looking for test storage... 
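The poller_perf summaries above can be cross-checked directly: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds with the reported tsc_hz. For the second run:

busy=2203391614; runs=4101000; tsc_hz=2200000000
cost_cyc=$((busy / runs))                        # 537 cycles per poller invocation
cost_nsec=$((cost_cyc * 1000000000 / tsc_hz))    # about 244 ns at 2.2 GHz
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
# the first run works out the same way: 2218515695 / 290000 gives 7650 cyc, 3477 nsec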
00:07:10.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:10.925 07:16:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:10.925 07:16:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:10.925 07:16:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:10.925 07:16:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:10.925 07:16:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:10.925 07:16:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:10.925 07:16:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:10.925 07:16:33 -- scripts/common.sh@335 -- # IFS=.-: 00:07:10.925 07:16:33 -- scripts/common.sh@335 -- # read -ra ver1 00:07:10.925 07:16:33 -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.925 07:16:33 -- scripts/common.sh@336 -- # read -ra ver2 00:07:10.925 07:16:33 -- scripts/common.sh@337 -- # local 'op=<' 00:07:10.925 07:16:33 -- scripts/common.sh@339 -- # ver1_l=2 00:07:10.925 07:16:33 -- scripts/common.sh@340 -- # ver2_l=1 00:07:10.925 07:16:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:10.925 07:16:33 -- scripts/common.sh@343 -- # case "$op" in 00:07:10.925 07:16:33 -- scripts/common.sh@344 -- # : 1 00:07:10.925 07:16:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:10.925 07:16:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.925 07:16:33 -- scripts/common.sh@364 -- # decimal 1 00:07:11.183 07:16:33 -- scripts/common.sh@352 -- # local d=1 00:07:11.183 07:16:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.183 07:16:33 -- scripts/common.sh@354 -- # echo 1 00:07:11.183 07:16:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:11.183 07:16:33 -- scripts/common.sh@365 -- # decimal 2 00:07:11.183 07:16:33 -- scripts/common.sh@352 -- # local d=2 00:07:11.183 07:16:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.183 07:16:33 -- scripts/common.sh@354 -- # echo 2 00:07:11.183 07:16:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:11.183 07:16:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:11.183 07:16:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:11.183 07:16:33 -- scripts/common.sh@367 -- # return 0 00:07:11.183 07:16:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.183 07:16:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:11.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.183 --rc genhtml_branch_coverage=1 00:07:11.183 --rc genhtml_function_coverage=1 00:07:11.183 --rc genhtml_legend=1 00:07:11.183 --rc geninfo_all_blocks=1 00:07:11.183 --rc geninfo_unexecuted_blocks=1 00:07:11.183 00:07:11.183 ' 00:07:11.183 07:16:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:11.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.183 --rc genhtml_branch_coverage=1 00:07:11.183 --rc genhtml_function_coverage=1 00:07:11.183 --rc genhtml_legend=1 00:07:11.183 --rc geninfo_all_blocks=1 00:07:11.183 --rc geninfo_unexecuted_blocks=1 00:07:11.183 00:07:11.183 ' 00:07:11.183 07:16:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:11.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.183 --rc genhtml_branch_coverage=1 00:07:11.183 --rc genhtml_function_coverage=1 00:07:11.183 --rc genhtml_legend=1 00:07:11.183 --rc geninfo_all_blocks=1 00:07:11.183 --rc geninfo_unexecuted_blocks=1 00:07:11.183 00:07:11.183 ' 00:07:11.183 07:16:33 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:11.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.183 --rc genhtml_branch_coverage=1 00:07:11.183 --rc genhtml_function_coverage=1 00:07:11.183 --rc genhtml_legend=1 00:07:11.183 --rc geninfo_all_blocks=1 00:07:11.183 --rc geninfo_unexecuted_blocks=1 00:07:11.183 00:07:11.183 ' 00:07:11.183 07:16:33 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:11.183 07:16:33 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:11.183 07:16:33 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:11.183 07:16:33 -- accel/accel.sh@59 -- # spdk_tgt_pid=68173 00:07:11.183 07:16:33 -- accel/accel.sh@60 -- # waitforlisten 68173 00:07:11.183 07:16:33 -- accel/accel.sh@58 -- # build_accel_config 00:07:11.183 07:16:33 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:11.183 07:16:33 -- common/autotest_common.sh@829 -- # '[' -z 68173 ']' 00:07:11.183 07:16:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.183 07:16:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.183 07:16:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.184 07:16:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.184 07:16:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.184 07:16:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.184 07:16:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.184 07:16:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.184 07:16:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.184 07:16:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.184 07:16:33 -- accel/accel.sh@42 -- # jq -r . 00:07:11.184 07:16:33 -- common/autotest_common.sh@10 -- # set +x 00:07:11.184 [2024-11-28 07:16:33.272566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.184 [2024-11-28 07:16:33.273019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68173 ] 00:07:11.184 [2024-11-28 07:16:33.414945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.441 [2024-11-28 07:16:33.528092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.441 [2024-11-28 07:16:33.528669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.375 07:16:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.375 07:16:34 -- common/autotest_common.sh@862 -- # return 0 00:07:12.375 07:16:34 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:12.375 07:16:34 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:12.375 07:16:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.375 07:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:12.375 07:16:34 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:12.375 07:16:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # IFS== 00:07:12.375 07:16:34 -- accel/accel.sh@64 -- # read -r opc module 00:07:12.375 07:16:34 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:12.375 07:16:34 -- accel/accel.sh@67 -- # killprocess 68173 00:07:12.375 07:16:34 -- common/autotest_common.sh@936 -- # '[' -z 68173 ']' 00:07:12.375 07:16:34 -- common/autotest_common.sh@940 -- # kill -0 68173 00:07:12.375 07:16:34 -- common/autotest_common.sh@941 -- # uname 00:07:12.375 07:16:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.375 07:16:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68173 00:07:12.375 killing process with pid 68173 00:07:12.375 07:16:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.375 07:16:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.375 07:16:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68173' 00:07:12.375 07:16:34 -- common/autotest_common.sh@955 -- # kill 68173 00:07:12.375 07:16:34 -- common/autotest_common.sh@960 -- # wait 68173 00:07:12.941 07:16:34 -- accel/accel.sh@68 -- # trap - ERR 00:07:12.941 07:16:34 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:12.941 07:16:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:12.941 07:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.941 07:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:12.941 07:16:34 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:12.941 07:16:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:12.941 07:16:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.941 07:16:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.941 07:16:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.941 07:16:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.941 07:16:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.941 07:16:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.941 07:16:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.941 07:16:34 -- accel/accel.sh@42 -- # jq -r . 
00:07:12.941 07:16:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.941 07:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:12.941 07:16:35 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:12.941 07:16:35 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:12.941 07:16:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.941 07:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:12.941 ************************************ 00:07:12.941 START TEST accel_missing_filename 00:07:12.941 ************************************ 00:07:12.941 07:16:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:12.941 07:16:35 -- common/autotest_common.sh@650 -- # local es=0 00:07:12.941 07:16:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:12.941 07:16:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:12.941 07:16:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.941 07:16:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:12.941 07:16:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.941 07:16:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:12.941 07:16:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:12.941 07:16:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.941 07:16:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.941 07:16:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.941 07:16:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.941 07:16:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.941 07:16:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.941 07:16:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.941 07:16:35 -- accel/accel.sh@42 -- # jq -r . 00:07:12.941 [2024-11-28 07:16:35.063725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.941 [2024-11-28 07:16:35.064084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68229 ] 00:07:12.941 [2024-11-28 07:16:35.203976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.199 [2024-11-28 07:16:35.303404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.199 [2024-11-28 07:16:35.360036] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.199 [2024-11-28 07:16:35.440595] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:13.457 A filename is required. 
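Note: the "A filename is required." error above is exactly what the accel_missing_filename test is looking for: the compress workload needs an input file via -l, so the NOT wrapper only asserts that accel_perf exits non-zero. A rough stand-alone equivalent of that check, assuming the example binary sits at build/examples/accel_perf inside the repo, might look like:

    # Run the compress workload without -l and require it to fail.
    if ./build/examples/accel_perf -t 1 -w compress; then
        echo "unexpected success: compress without an input file should be rejected" >&2
        exit 1
    fi
    echo "accel_perf refused to run compress without -l, as expected"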
00:07:13.457 ************************************ 00:07:13.457 END TEST accel_missing_filename 00:07:13.457 ************************************ 00:07:13.457 07:16:35 -- common/autotest_common.sh@653 -- # es=234 00:07:13.457 07:16:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.457 07:16:35 -- common/autotest_common.sh@662 -- # es=106 00:07:13.457 07:16:35 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:13.457 07:16:35 -- common/autotest_common.sh@670 -- # es=1 00:07:13.457 07:16:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.457 00:07:13.457 real 0m0.488s 00:07:13.457 user 0m0.320s 00:07:13.457 sys 0m0.117s 00:07:13.457 07:16:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.457 07:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:13.457 07:16:35 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.457 07:16:35 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:13.457 07:16:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.457 07:16:35 -- common/autotest_common.sh@10 -- # set +x 00:07:13.457 ************************************ 00:07:13.457 START TEST accel_compress_verify 00:07:13.457 ************************************ 00:07:13.457 07:16:35 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.457 07:16:35 -- common/autotest_common.sh@650 -- # local es=0 00:07:13.457 07:16:35 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.457 07:16:35 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:13.457 07:16:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.457 07:16:35 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:13.457 07:16:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.457 07:16:35 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.457 07:16:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:13.457 07:16:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.457 07:16:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.457 07:16:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.457 07:16:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.457 07:16:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.457 07:16:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.457 07:16:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.457 07:16:35 -- accel/accel.sh@42 -- # jq -r . 00:07:13.457 [2024-11-28 07:16:35.604380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
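Note: the es=234 / es=106 / es=1 sequence near the start of the block above is the test helper normalising the failing exit status before declaring the negative test a pass: codes above 128 (signal-style exits) are folded down by 128, a case statement then maps the remainder to a small canonical value, and anything non-zero is accepted as the expected failure. A condensed sketch of the idea (an approximation, not the actual autotest_common.sh code):

    # Expect "$@" to fail; fold signal-style exit codes (>128) before checking.
    expect_failure() {
        "$@"
        local es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # e.g. 234 -> 106, as traced above
        (( es != 0 ))                          # this wrapper succeeds only if the command failed
    }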
00:07:13.457 [2024-11-28 07:16:35.604510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68249 ] 00:07:13.715 [2024-11-28 07:16:35.745176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.715 [2024-11-28 07:16:35.850211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.715 [2024-11-28 07:16:35.925265] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:13.974 [2024-11-28 07:16:36.013670] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:13.974 00:07:13.974 Compression does not support the verify option, aborting. 00:07:13.974 07:16:36 -- common/autotest_common.sh@653 -- # es=161 00:07:13.974 07:16:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.974 07:16:36 -- common/autotest_common.sh@662 -- # es=33 00:07:13.974 07:16:36 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:13.974 07:16:36 -- common/autotest_common.sh@670 -- # es=1 00:07:13.974 07:16:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.974 00:07:13.974 real 0m0.524s 00:07:13.974 user 0m0.326s 00:07:13.974 sys 0m0.144s 00:07:13.974 07:16:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.974 ************************************ 00:07:13.974 END TEST accel_compress_verify 00:07:13.974 ************************************ 00:07:13.974 07:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 07:16:36 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:13.974 07:16:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:13.974 07:16:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.974 07:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 ************************************ 00:07:13.974 START TEST accel_wrong_workload 00:07:13.974 ************************************ 00:07:13.974 07:16:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:13.974 07:16:36 -- common/autotest_common.sh@650 -- # local es=0 00:07:13.974 07:16:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:13.974 07:16:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:13.974 07:16:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.974 07:16:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:13.974 07:16:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.974 07:16:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:13.974 07:16:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:13.974 07:16:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.974 07:16:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.974 07:16:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.974 07:16:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.974 07:16:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.974 07:16:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.974 07:16:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.974 07:16:36 -- accel/accel.sh@42 -- # jq -r . 
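Note: earlier in this stretch of the log, before any workload tests run, the harness queries the target for its opcode-to-module assignments (the accel_get_opc_assignments call followed by the long IFS== read loop) and stores them in the expected_opcs map. A minimal stand-alone sketch of the same pattern, assuming jq is available and that rpc_py points at SPDK's usual scripts/rpc.py client with the target already listening on /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    # Build an opcode -> module map from a running spdk_tgt, mirroring the loop traced above.
    declare -A expected_opcs
    rpc_py=scripts/rpc.py   # assumed location of the SPDK RPC client
    while IFS== read -r opc module; do
        expected_opcs["$opc"]=$module
    done < <("$rpc_py" accel_get_opc_assignments \
             | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')
    # With no hardware accel drivers configured, every opcode should report "software".
    for opc in "${!expected_opcs[@]}"; do
        printf '%s -> %s\n' "$opc" "${expected_opcs[$opc]}"
    done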
00:07:13.974 Unsupported workload type: foobar 00:07:13.974 [2024-11-28 07:16:36.179390] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:13.974 accel_perf options: 00:07:13.974 [-h help message] 00:07:13.974 [-q queue depth per core] 00:07:13.974 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:13.974 [-T number of threads per core 00:07:13.974 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:13.974 [-t time in seconds] 00:07:13.974 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:13.974 [ dif_verify, , dif_generate, dif_generate_copy 00:07:13.974 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:13.974 [-l for compress/decompress workloads, name of uncompressed input file 00:07:13.974 [-S for crc32c workload, use this seed value (default 0) 00:07:13.974 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:13.974 [-f for fill workload, use this BYTE value (default 255) 00:07:13.974 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:13.974 [-y verify result if this switch is on] 00:07:13.974 [-a tasks to allocate per core (default: same value as -q)] 00:07:13.974 Can be used to spread operations across a wider range of memory. 00:07:13.974 07:16:36 -- common/autotest_common.sh@653 -- # es=1 00:07:13.974 07:16:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.974 07:16:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.974 ************************************ 00:07:13.974 END TEST accel_wrong_workload 00:07:13.974 ************************************ 00:07:13.974 07:16:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.974 00:07:13.974 real 0m0.033s 00:07:13.974 user 0m0.022s 00:07:13.974 sys 0m0.011s 00:07:13.974 07:16:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.974 07:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 07:16:36 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:13.974 07:16:36 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:13.974 07:16:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.974 07:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 ************************************ 00:07:13.974 START TEST accel_negative_buffers 00:07:13.974 ************************************ 00:07:13.974 07:16:36 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:13.974 07:16:36 -- common/autotest_common.sh@650 -- # local es=0 00:07:13.974 07:16:36 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:13.974 07:16:36 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:13.974 07:16:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.974 07:16:36 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:13.974 07:16:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.974 07:16:36 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:13.974 07:16:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:14.233 07:16:36 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:14.233 07:16:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.233 07:16:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.233 07:16:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.233 07:16:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.233 07:16:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.233 07:16:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.233 07:16:36 -- accel/accel.sh@42 -- # jq -r . 00:07:14.233 -x option must be non-negative. 00:07:14.233 [2024-11-28 07:16:36.268898] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:14.233 accel_perf options: 00:07:14.233 [-h help message] 00:07:14.233 [-q queue depth per core] 00:07:14.233 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:14.233 [-T number of threads per core 00:07:14.233 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:14.233 [-t time in seconds] 00:07:14.233 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:14.233 [ dif_verify, , dif_generate, dif_generate_copy 00:07:14.233 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:14.233 [-l for compress/decompress workloads, name of uncompressed input file 00:07:14.233 [-S for crc32c workload, use this seed value (default 0) 00:07:14.233 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:14.233 [-f for fill workload, use this BYTE value (default 255) 00:07:14.233 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:14.233 [-y verify result if this switch is on] 00:07:14.233 [-a tasks to allocate per core (default: same value as -q)] 00:07:14.233 Can be used to spread operations across a wider range of memory. 
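Note: the two option dumps above are accel_perf rejecting first an unknown workload (-w foobar) and then a negative source-buffer count (-x -1). Under the same option set, a valid xor run would pass the documented minimum of two source buffers, for example:

    # One second of xor with result verification and the minimum allowed buffer count.
    ./build/examples/accel_perf -t 1 -w xor -y -x 2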
00:07:14.233 07:16:36 -- common/autotest_common.sh@653 -- # es=1 00:07:14.233 07:16:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.233 07:16:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.233 07:16:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.233 00:07:14.233 real 0m0.036s 00:07:14.233 user 0m0.019s 00:07:14.233 sys 0m0.015s 00:07:14.233 07:16:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.233 07:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:14.233 ************************************ 00:07:14.233 END TEST accel_negative_buffers 00:07:14.233 ************************************ 00:07:14.233 07:16:36 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:14.233 07:16:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:14.233 07:16:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.233 07:16:36 -- common/autotest_common.sh@10 -- # set +x 00:07:14.233 ************************************ 00:07:14.233 START TEST accel_crc32c 00:07:14.233 ************************************ 00:07:14.233 07:16:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:14.233 07:16:36 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.233 07:16:36 -- accel/accel.sh@17 -- # local accel_module 00:07:14.233 07:16:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:14.233 07:16:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:14.233 07:16:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.233 07:16:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.233 07:16:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.233 07:16:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.233 07:16:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.233 07:16:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.233 07:16:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.233 07:16:36 -- accel/accel.sh@42 -- # jq -r . 00:07:14.233 [2024-11-28 07:16:36.357644] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:14.233 [2024-11-28 07:16:36.358899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68313 ] 00:07:14.233 [2024-11-28 07:16:36.502174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.492 [2024-11-28 07:16:36.601732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.869 07:16:37 -- accel/accel.sh@18 -- # out=' 00:07:15.869 SPDK Configuration: 00:07:15.869 Core mask: 0x1 00:07:15.869 00:07:15.869 Accel Perf Configuration: 00:07:15.869 Workload Type: crc32c 00:07:15.869 CRC-32C seed: 32 00:07:15.869 Transfer size: 4096 bytes 00:07:15.869 Vector count 1 00:07:15.869 Module: software 00:07:15.869 Queue depth: 32 00:07:15.869 Allocate depth: 32 00:07:15.869 # threads/core: 1 00:07:15.869 Run time: 1 seconds 00:07:15.869 Verify: Yes 00:07:15.869 00:07:15.869 Running for 1 seconds... 
00:07:15.869 00:07:15.869 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.869 ------------------------------------------------------------------------------------ 00:07:15.869 0,0 444864/s 1737 MiB/s 0 0 00:07:15.869 ==================================================================================== 00:07:15.869 Total 444864/s 1737 MiB/s 0 0' 00:07:15.869 07:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:15.869 07:16:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:15.869 07:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:15.869 07:16:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.869 07:16:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:15.869 07:16:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.869 07:16:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.869 07:16:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.869 07:16:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.869 07:16:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.869 07:16:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.869 07:16:37 -- accel/accel.sh@42 -- # jq -r . 00:07:15.869 [2024-11-28 07:16:37.871381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.869 [2024-11-28 07:16:37.871872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68333 ] 00:07:15.869 [2024-11-28 07:16:38.009509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.869 [2024-11-28 07:16:38.113223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val= 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val= 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val=0x1 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val= 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val= 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val=crc32c 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val=32 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val= 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val=software 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val=32 00:07:16.128 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.128 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.128 07:16:38 -- accel/accel.sh@21 -- # val=32 00:07:16.129 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.129 07:16:38 -- accel/accel.sh@21 -- # val=1 00:07:16.129 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.129 07:16:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.129 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.129 07:16:38 -- accel/accel.sh@21 -- # val=Yes 00:07:16.129 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.129 07:16:38 -- accel/accel.sh@21 -- # val= 00:07:16.129 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:16.129 07:16:38 -- accel/accel.sh@21 -- # val= 00:07:16.129 07:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:16.129 07:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 07:16:39 -- accel/accel.sh@21 -- # val= 00:07:17.505 07:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 07:16:39 -- accel/accel.sh@21 -- # val= 00:07:17.505 07:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 07:16:39 -- accel/accel.sh@21 -- # val= 00:07:17.505 07:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 07:16:39 -- accel/accel.sh@21 -- # val= 00:07:17.505 07:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 07:16:39 -- accel/accel.sh@21 -- # val= 00:07:17.505 07:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 07:16:39 -- 
accel/accel.sh@20 -- # read -r var val 00:07:17.505 07:16:39 -- accel/accel.sh@21 -- # val= 00:07:17.505 07:16:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # IFS=: 00:07:17.505 07:16:39 -- accel/accel.sh@20 -- # read -r var val 00:07:17.505 07:16:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.505 07:16:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:17.505 07:16:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.505 00:07:17.505 real 0m3.022s 00:07:17.505 user 0m2.562s 00:07:17.505 sys 0m0.247s 00:07:17.505 07:16:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:17.505 07:16:39 -- common/autotest_common.sh@10 -- # set +x 00:07:17.505 ************************************ 00:07:17.505 END TEST accel_crc32c 00:07:17.505 ************************************ 00:07:17.505 07:16:39 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:17.505 07:16:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:17.505 07:16:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.505 07:16:39 -- common/autotest_common.sh@10 -- # set +x 00:07:17.505 ************************************ 00:07:17.505 START TEST accel_crc32c_C2 00:07:17.505 ************************************ 00:07:17.505 07:16:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:17.505 07:16:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.505 07:16:39 -- accel/accel.sh@17 -- # local accel_module 00:07:17.505 07:16:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:17.505 07:16:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:17.505 07:16:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.505 07:16:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.505 07:16:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.505 07:16:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.505 07:16:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.505 07:16:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.505 07:16:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.505 07:16:39 -- accel/accel.sh@42 -- # jq -r . 00:07:17.505 [2024-11-28 07:16:39.432490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.505 [2024-11-28 07:16:39.432639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68367 ] 00:07:17.505 [2024-11-28 07:16:39.572496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.505 [2024-11-28 07:16:39.679515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.881 07:16:40 -- accel/accel.sh@18 -- # out=' 00:07:18.881 SPDK Configuration: 00:07:18.881 Core mask: 0x1 00:07:18.881 00:07:18.881 Accel Perf Configuration: 00:07:18.881 Workload Type: crc32c 00:07:18.881 CRC-32C seed: 0 00:07:18.881 Transfer size: 4096 bytes 00:07:18.881 Vector count 2 00:07:18.881 Module: software 00:07:18.881 Queue depth: 32 00:07:18.881 Allocate depth: 32 00:07:18.881 # threads/core: 1 00:07:18.881 Run time: 1 seconds 00:07:18.881 Verify: Yes 00:07:18.881 00:07:18.881 Running for 1 seconds... 
00:07:18.881 00:07:18.881 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.881 ------------------------------------------------------------------------------------ 00:07:18.881 0,0 346272/s 2705 MiB/s 0 0 00:07:18.881 ==================================================================================== 00:07:18.881 Total 346272/s 1352 MiB/s 0 0' 00:07:18.881 07:16:40 -- accel/accel.sh@20 -- # IFS=: 00:07:18.881 07:16:40 -- accel/accel.sh@20 -- # read -r var val 00:07:18.881 07:16:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:18.881 07:16:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:18.881 07:16:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.881 07:16:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.881 07:16:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.881 07:16:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.881 07:16:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.881 07:16:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.881 07:16:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.881 07:16:40 -- accel/accel.sh@42 -- # jq -r . 00:07:18.881 [2024-11-28 07:16:40.962050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:18.881 [2024-11-28 07:16:40.962420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68387 ] 00:07:18.881 [2024-11-28 07:16:41.099696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.140 [2024-11-28 07:16:41.208441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val= 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val= 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val=0x1 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val= 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val= 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val=crc32c 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val=0 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val= 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val=software 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val=32 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val=32 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val=1 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val=Yes 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val= 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:19.140 07:16:41 -- accel/accel.sh@21 -- # val= 00:07:19.140 07:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:19.140 07:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:20.539 07:16:42 -- accel/accel.sh@21 -- # val= 00:07:20.539 07:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.539 07:16:42 -- accel/accel.sh@21 -- # val= 00:07:20.539 07:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.539 07:16:42 -- accel/accel.sh@21 -- # val= 00:07:20.539 07:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.539 07:16:42 -- accel/accel.sh@21 -- # val= 00:07:20.539 07:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.539 07:16:42 -- accel/accel.sh@21 -- # val= 00:07:20.539 07:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.539 07:16:42 -- 
accel/accel.sh@20 -- # read -r var val 00:07:20.539 07:16:42 -- accel/accel.sh@21 -- # val= 00:07:20.539 07:16:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # IFS=: 00:07:20.539 07:16:42 -- accel/accel.sh@20 -- # read -r var val 00:07:20.539 07:16:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.539 07:16:42 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:20.539 07:16:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.539 00:07:20.539 real 0m3.082s 00:07:20.539 user 0m2.596s 00:07:20.539 sys 0m0.275s 00:07:20.539 07:16:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:20.539 ************************************ 00:07:20.539 END TEST accel_crc32c_C2 00:07:20.539 ************************************ 00:07:20.539 07:16:42 -- common/autotest_common.sh@10 -- # set +x 00:07:20.539 07:16:42 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:20.539 07:16:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:20.539 07:16:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.539 07:16:42 -- common/autotest_common.sh@10 -- # set +x 00:07:20.539 ************************************ 00:07:20.539 START TEST accel_copy 00:07:20.539 ************************************ 00:07:20.539 07:16:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:20.539 07:16:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.539 07:16:42 -- accel/accel.sh@17 -- # local accel_module 00:07:20.539 07:16:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:20.539 07:16:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:20.539 07:16:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.539 07:16:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.539 07:16:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.539 07:16:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.539 07:16:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.539 07:16:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.539 07:16:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.539 07:16:42 -- accel/accel.sh@42 -- # jq -r . 00:07:20.539 [2024-11-28 07:16:42.565598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:20.539 [2024-11-28 07:16:42.565713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68421 ] 00:07:20.539 [2024-11-28 07:16:42.703434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.539 [2024-11-28 07:16:42.809506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.915 07:16:44 -- accel/accel.sh@18 -- # out=' 00:07:21.915 SPDK Configuration: 00:07:21.915 Core mask: 0x1 00:07:21.915 00:07:21.915 Accel Perf Configuration: 00:07:21.915 Workload Type: copy 00:07:21.915 Transfer size: 4096 bytes 00:07:21.915 Vector count 1 00:07:21.915 Module: software 00:07:21.915 Queue depth: 32 00:07:21.915 Allocate depth: 32 00:07:21.915 # threads/core: 1 00:07:21.915 Run time: 1 seconds 00:07:21.915 Verify: Yes 00:07:21.915 00:07:21.915 Running for 1 seconds... 
00:07:21.915 00:07:21.915 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.915 ------------------------------------------------------------------------------------ 00:07:21.915 0,0 307744/s 1202 MiB/s 0 0 00:07:21.915 ==================================================================================== 00:07:21.915 Total 307744/s 1202 MiB/s 0 0' 00:07:21.915 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:21.915 07:16:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:21.915 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:21.915 07:16:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:21.915 07:16:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.915 07:16:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.915 07:16:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.915 07:16:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.915 07:16:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.915 07:16:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.915 07:16:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.915 07:16:44 -- accel/accel.sh@42 -- # jq -r . 00:07:21.915 [2024-11-28 07:16:44.087708] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:21.915 [2024-11-28 07:16:44.087873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68441 ] 00:07:22.174 [2024-11-28 07:16:44.229073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.174 [2024-11-28 07:16:44.337563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val= 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val= 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val=0x1 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val= 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val= 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val=copy 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- 
accel/accel.sh@21 -- # val= 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val=software 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val=32 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val=32 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val=1 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val=Yes 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val= 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:22.174 07:16:44 -- accel/accel.sh@21 -- # val= 00:07:22.174 07:16:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # IFS=: 00:07:22.174 07:16:44 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 07:16:45 -- accel/accel.sh@21 -- # val= 00:07:23.549 07:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 07:16:45 -- accel/accel.sh@21 -- # val= 00:07:23.549 07:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 07:16:45 -- accel/accel.sh@21 -- # val= 00:07:23.549 07:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 07:16:45 -- accel/accel.sh@21 -- # val= 00:07:23.549 07:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 07:16:45 -- accel/accel.sh@21 -- # val= 00:07:23.549 07:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:23.549 07:16:45 -- accel/accel.sh@21 -- # val= 00:07:23.549 07:16:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.549 07:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:23.549 07:16:45 -- 
accel/accel.sh@20 -- # read -r var val 00:07:23.549 07:16:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.549 07:16:45 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:23.549 07:16:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.549 00:07:23.549 real 0m3.052s 00:07:23.549 user 0m2.582s 00:07:23.549 sys 0m0.264s 00:07:23.549 07:16:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.550 07:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:23.550 ************************************ 00:07:23.550 END TEST accel_copy 00:07:23.550 ************************************ 00:07:23.550 07:16:45 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:23.550 07:16:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:23.550 07:16:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.550 07:16:45 -- common/autotest_common.sh@10 -- # set +x 00:07:23.550 ************************************ 00:07:23.550 START TEST accel_fill 00:07:23.550 ************************************ 00:07:23.550 07:16:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:23.550 07:16:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.550 07:16:45 -- accel/accel.sh@17 -- # local accel_module 00:07:23.550 07:16:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:23.550 07:16:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:23.550 07:16:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.550 07:16:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.550 07:16:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.550 07:16:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.550 07:16:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.550 07:16:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.550 07:16:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.550 07:16:45 -- accel/accel.sh@42 -- # jq -r . 00:07:23.550 [2024-11-28 07:16:45.673322] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:23.550 [2024-11-28 07:16:45.673455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68475 ] 00:07:23.550 [2024-11-28 07:16:45.815920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.808 [2024-11-28 07:16:45.934233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.182 07:16:47 -- accel/accel.sh@18 -- # out=' 00:07:25.182 SPDK Configuration: 00:07:25.182 Core mask: 0x1 00:07:25.182 00:07:25.182 Accel Perf Configuration: 00:07:25.182 Workload Type: fill 00:07:25.182 Fill pattern: 0x80 00:07:25.182 Transfer size: 4096 bytes 00:07:25.182 Vector count 1 00:07:25.182 Module: software 00:07:25.182 Queue depth: 64 00:07:25.182 Allocate depth: 64 00:07:25.182 # threads/core: 1 00:07:25.182 Run time: 1 seconds 00:07:25.182 Verify: Yes 00:07:25.182 00:07:25.182 Running for 1 seconds... 
00:07:25.182 00:07:25.182 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.182 ------------------------------------------------------------------------------------ 00:07:25.182 0,0 458560/s 1791 MiB/s 0 0 00:07:25.182 ==================================================================================== 00:07:25.182 Total 458560/s 1791 MiB/s 0 0' 00:07:25.182 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.182 07:16:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.182 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.182 07:16:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.182 07:16:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.182 07:16:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.182 07:16:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.182 07:16:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.182 07:16:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.182 07:16:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.182 07:16:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.182 07:16:47 -- accel/accel.sh@42 -- # jq -r . 00:07:25.182 [2024-11-28 07:16:47.203353] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.182 [2024-11-28 07:16:47.203501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68495 ] 00:07:25.182 [2024-11-28 07:16:47.348357] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.182 [2024-11-28 07:16:47.449115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val= 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val= 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val=0x1 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val= 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val= 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val=fill 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val=0x80 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 
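Note: the fill runs above are launched as accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y, which is why the configuration dump reports a fill pattern of 0x80 (128 decimal) and queue/allocate depths of 64. Stripped of the test harness, the underlying invocation would look roughly like:

    # Fill workload: pattern byte 128 (0x80), queue depth 64, 64 tasks per core, verify on.
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y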
00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val= 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val=software 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val=64 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val=64 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val=1 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val=Yes 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val= 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:25.441 07:16:47 -- accel/accel.sh@21 -- # val= 00:07:25.441 07:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:25.441 07:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:26.816 07:16:48 -- accel/accel.sh@21 -- # val= 00:07:26.816 07:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.816 07:16:48 -- accel/accel.sh@21 -- # val= 00:07:26.816 07:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.816 07:16:48 -- accel/accel.sh@21 -- # val= 00:07:26.816 07:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.816 07:16:48 -- accel/accel.sh@21 -- # val= 00:07:26.816 07:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.816 07:16:48 -- accel/accel.sh@21 -- # val= 00:07:26.816 07:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # IFS=: 
00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.816 07:16:48 -- accel/accel.sh@21 -- # val= 00:07:26.816 07:16:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # IFS=: 00:07:26.816 07:16:48 -- accel/accel.sh@20 -- # read -r var val 00:07:26.816 07:16:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.816 07:16:48 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:26.816 07:16:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.816 00:07:26.816 real 0m3.053s 00:07:26.816 user 0m2.591s 00:07:26.816 sys 0m0.253s 00:07:26.816 07:16:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.816 ************************************ 00:07:26.816 END TEST accel_fill 00:07:26.816 ************************************ 00:07:26.816 07:16:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.816 07:16:48 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:26.816 07:16:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:26.816 07:16:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.816 07:16:48 -- common/autotest_common.sh@10 -- # set +x 00:07:26.816 ************************************ 00:07:26.816 START TEST accel_copy_crc32c 00:07:26.816 ************************************ 00:07:26.816 07:16:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:26.816 07:16:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.816 07:16:48 -- accel/accel.sh@17 -- # local accel_module 00:07:26.817 07:16:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:26.817 07:16:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:26.817 07:16:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.817 07:16:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.817 07:16:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.817 07:16:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.817 07:16:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.817 07:16:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.817 07:16:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.817 07:16:48 -- accel/accel.sh@42 -- # jq -r . 00:07:26.817 [2024-11-28 07:16:48.775228] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:26.817 [2024-11-28 07:16:48.775414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68529 ] 00:07:26.817 [2024-11-28 07:16:48.914774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.817 [2024-11-28 07:16:49.037418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.194 07:16:50 -- accel/accel.sh@18 -- # out=' 00:07:28.194 SPDK Configuration: 00:07:28.194 Core mask: 0x1 00:07:28.194 00:07:28.194 Accel Perf Configuration: 00:07:28.194 Workload Type: copy_crc32c 00:07:28.194 CRC-32C seed: 0 00:07:28.194 Vector size: 4096 bytes 00:07:28.194 Transfer size: 4096 bytes 00:07:28.194 Vector count 1 00:07:28.194 Module: software 00:07:28.194 Queue depth: 32 00:07:28.194 Allocate depth: 32 00:07:28.194 # threads/core: 1 00:07:28.194 Run time: 1 seconds 00:07:28.194 Verify: Yes 00:07:28.194 00:07:28.194 Running for 1 seconds... 
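The copy_crc32c workload configured above combines a buffer copy with a CRC-32C calculation (seed 0, a single 4096-byte vector). A hedged sketch of the equivalent standalone command, reusing only flags that appear in the log and assuming the JSON config can again be omitted:

  # 1 s run, copy+CRC-32C workload, verify enabled; queue and allocate depths
  # are left at the values this run reports by default (32/32)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y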
00:07:28.194 00:07:28.194 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.194 ------------------------------------------------------------------------------------ 00:07:28.194 0,0 246496/s 962 MiB/s 0 0 00:07:28.194 ==================================================================================== 00:07:28.194 Total 246496/s 962 MiB/s 0 0' 00:07:28.194 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.194 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.194 07:16:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:28.194 07:16:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:28.194 07:16:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.194 07:16:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.194 07:16:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.194 07:16:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.194 07:16:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.194 07:16:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.194 07:16:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.194 07:16:50 -- accel/accel.sh@42 -- # jq -r . 00:07:28.194 [2024-11-28 07:16:50.313952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.194 [2024-11-28 07:16:50.314139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68549 ] 00:07:28.194 [2024-11-28 07:16:50.458890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.453 [2024-11-28 07:16:50.567993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val= 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val= 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val=0x1 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val= 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val= 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val=0 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 
07:16:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val= 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val=software 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val=32 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val=32 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val=1 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val=Yes 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val= 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:28.453 07:16:50 -- accel/accel.sh@21 -- # val= 00:07:28.453 07:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:28.453 07:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:29.829 07:16:51 -- accel/accel.sh@21 -- # val= 00:07:29.829 07:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:29.829 07:16:51 -- accel/accel.sh@21 -- # val= 00:07:29.829 07:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:29.829 07:16:51 -- accel/accel.sh@21 -- # val= 00:07:29.829 07:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:29.829 07:16:51 -- accel/accel.sh@21 -- # val= 00:07:29.829 07:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # IFS=: 
00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:29.829 07:16:51 -- accel/accel.sh@21 -- # val= 00:07:29.829 07:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:29.829 07:16:51 -- accel/accel.sh@21 -- # val= 00:07:29.829 07:16:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # IFS=: 00:07:29.829 07:16:51 -- accel/accel.sh@20 -- # read -r var val 00:07:29.829 07:16:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.829 07:16:51 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:29.829 07:16:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.829 00:07:29.829 real 0m3.107s 00:07:29.829 user 0m2.604s 00:07:29.829 sys 0m0.292s 00:07:29.829 ************************************ 00:07:29.829 END TEST accel_copy_crc32c 00:07:29.829 ************************************ 00:07:29.829 07:16:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.829 07:16:51 -- common/autotest_common.sh@10 -- # set +x 00:07:29.829 07:16:51 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.829 07:16:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:29.829 07:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.829 07:16:51 -- common/autotest_common.sh@10 -- # set +x 00:07:29.829 ************************************ 00:07:29.829 START TEST accel_copy_crc32c_C2 00:07:29.829 ************************************ 00:07:29.829 07:16:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.829 07:16:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.829 07:16:51 -- accel/accel.sh@17 -- # local accel_module 00:07:29.829 07:16:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:29.829 07:16:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:29.829 07:16:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.829 07:16:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.829 07:16:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.829 07:16:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.829 07:16:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.829 07:16:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.829 07:16:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.829 07:16:51 -- accel/accel.sh@42 -- # jq -r . 00:07:29.829 [2024-11-28 07:16:51.941013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:29.829 [2024-11-28 07:16:51.941567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68583 ] 00:07:29.829 [2024-11-28 07:16:52.081654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.087 [2024-11-28 07:16:52.185683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.465 07:16:53 -- accel/accel.sh@18 -- # out=' 00:07:31.465 SPDK Configuration: 00:07:31.465 Core mask: 0x1 00:07:31.465 00:07:31.465 Accel Perf Configuration: 00:07:31.465 Workload Type: copy_crc32c 00:07:31.465 CRC-32C seed: 0 00:07:31.465 Vector size: 4096 bytes 00:07:31.465 Transfer size: 8192 bytes 00:07:31.465 Vector count 2 00:07:31.465 Module: software 00:07:31.465 Queue depth: 32 00:07:31.465 Allocate depth: 32 00:07:31.465 # threads/core: 1 00:07:31.465 Run time: 1 seconds 00:07:31.465 Verify: Yes 00:07:31.465 00:07:31.465 Running for 1 seconds... 00:07:31.465 00:07:31.465 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.465 ------------------------------------------------------------------------------------ 00:07:31.465 0,0 176672/s 1380 MiB/s 0 0 00:07:31.465 ==================================================================================== 00:07:31.465 Total 176672/s 1380 MiB/s 0 0' 00:07:31.465 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.465 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.465 07:16:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:31.465 07:16:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.465 07:16:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:31.465 07:16:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.465 07:16:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.465 07:16:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.465 07:16:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.465 07:16:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.465 07:16:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.465 07:16:53 -- accel/accel.sh@42 -- # jq -r . 00:07:31.465 [2024-11-28 07:16:53.477611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
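With -C 2 the run above uses two 4096-byte vectors per operation, which is why the configuration reports "Vector size: 4096 bytes" but "Transfer size: 8192 bytes". The bandwidth column is transfers/s times transfer size: 176672/s * 8192 B / 2^20 is roughly 1380 MiB/s, matching both the per-core and Total rows. A sketch of the equivalent standalone command, under the same assumption as the earlier sketches about omitting the JSON config:

  # -C 2 selects two chained 4096 B vectors per copy_crc32c operation (8192 B per transfer)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2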
00:07:31.465 [2024-11-28 07:16:53.478093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68603 ] 00:07:31.465 [2024-11-28 07:16:53.612931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.465 [2024-11-28 07:16:53.718292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.723 07:16:53 -- accel/accel.sh@21 -- # val= 00:07:31.723 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.723 07:16:53 -- accel/accel.sh@21 -- # val= 00:07:31.723 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.723 07:16:53 -- accel/accel.sh@21 -- # val=0x1 00:07:31.723 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.723 07:16:53 -- accel/accel.sh@21 -- # val= 00:07:31.723 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.723 07:16:53 -- accel/accel.sh@21 -- # val= 00:07:31.723 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.723 07:16:53 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:31.723 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.723 07:16:53 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.723 07:16:53 -- accel/accel.sh@21 -- # val=0 00:07:31.723 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.723 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val= 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val=software 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val=32 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val=32 
00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val=1 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val=Yes 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val= 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:31.724 07:16:53 -- accel/accel.sh@21 -- # val= 00:07:31.724 07:16:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # IFS=: 00:07:31.724 07:16:53 -- accel/accel.sh@20 -- # read -r var val 00:07:33.097 07:16:54 -- accel/accel.sh@21 -- # val= 00:07:33.097 07:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:33.097 07:16:54 -- accel/accel.sh@21 -- # val= 00:07:33.097 07:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:33.097 07:16:54 -- accel/accel.sh@21 -- # val= 00:07:33.097 07:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:33.097 07:16:54 -- accel/accel.sh@21 -- # val= 00:07:33.097 07:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:33.097 07:16:54 -- accel/accel.sh@21 -- # val= 00:07:33.097 07:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:33.097 07:16:54 -- accel/accel.sh@21 -- # val= 00:07:33.097 07:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:33.097 07:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:33.097 07:16:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.097 07:16:54 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:33.097 07:16:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.097 00:07:33.097 real 0m3.068s 00:07:33.097 user 0m2.591s 00:07:33.097 sys 0m0.265s 00:07:33.097 ************************************ 00:07:33.097 END TEST accel_copy_crc32c_C2 00:07:33.097 ************************************ 00:07:33.097 07:16:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:33.097 07:16:54 -- common/autotest_common.sh@10 -- # set +x 00:07:33.097 07:16:55 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:33.097 07:16:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
00:07:33.097 07:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.097 07:16:55 -- common/autotest_common.sh@10 -- # set +x 00:07:33.097 ************************************ 00:07:33.097 START TEST accel_dualcast 00:07:33.097 ************************************ 00:07:33.097 07:16:55 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:33.097 07:16:55 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.097 07:16:55 -- accel/accel.sh@17 -- # local accel_module 00:07:33.097 07:16:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:33.097 07:16:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:33.097 07:16:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.097 07:16:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.097 07:16:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.097 07:16:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.097 07:16:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.097 07:16:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.097 07:16:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.097 07:16:55 -- accel/accel.sh@42 -- # jq -r . 00:07:33.097 [2024-11-28 07:16:55.061186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.097 [2024-11-28 07:16:55.061992] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68642 ] 00:07:33.097 [2024-11-28 07:16:55.203639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.097 [2024-11-28 07:16:55.313247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.491 07:16:56 -- accel/accel.sh@18 -- # out=' 00:07:34.491 SPDK Configuration: 00:07:34.491 Core mask: 0x1 00:07:34.491 00:07:34.491 Accel Perf Configuration: 00:07:34.491 Workload Type: dualcast 00:07:34.491 Transfer size: 4096 bytes 00:07:34.491 Vector count 1 00:07:34.491 Module: software 00:07:34.491 Queue depth: 32 00:07:34.491 Allocate depth: 32 00:07:34.491 # threads/core: 1 00:07:34.491 Run time: 1 seconds 00:07:34.491 Verify: Yes 00:07:34.491 00:07:34.491 Running for 1 seconds... 00:07:34.491 00:07:34.491 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.491 ------------------------------------------------------------------------------------ 00:07:34.491 0,0 341344/s 1333 MiB/s 0 0 00:07:34.491 ==================================================================================== 00:07:34.491 Total 341344/s 1333 MiB/s 0 0' 00:07:34.491 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.491 07:16:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:34.491 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.491 07:16:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:34.491 07:16:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.491 07:16:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.491 07:16:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.491 07:16:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.491 07:16:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.491 07:16:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.491 07:16:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.491 07:16:56 -- accel/accel.sh@42 -- # jq -r . 
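The dualcast workload above copies each 4096-byte source buffer to two destinations; the first run reports about 341344 transfers/s, i.e. roughly 1333 MiB/s of source data through the software module. A minimal standalone sketch under the same assumptions as the earlier examples:

  # dualcast: one source, two destination buffers per operation; verify enabled
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y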
00:07:34.491 [2024-11-28 07:16:56.594259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.491 [2024-11-28 07:16:56.594466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68657 ] 00:07:34.491 [2024-11-28 07:16:56.739472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.750 [2024-11-28 07:16:56.843099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val= 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val= 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val=0x1 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val= 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val= 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val=dualcast 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val= 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val=software 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val=32 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val=32 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val=1 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 
07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val=Yes 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val= 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:34.750 07:16:56 -- accel/accel.sh@21 -- # val= 00:07:34.750 07:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:34.750 07:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:36.127 07:16:58 -- accel/accel.sh@21 -- # val= 00:07:36.127 07:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.127 07:16:58 -- accel/accel.sh@21 -- # val= 00:07:36.127 07:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.127 07:16:58 -- accel/accel.sh@21 -- # val= 00:07:36.127 07:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.127 07:16:58 -- accel/accel.sh@21 -- # val= 00:07:36.127 07:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.127 07:16:58 -- accel/accel.sh@21 -- # val= 00:07:36.127 07:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.127 07:16:58 -- accel/accel.sh@21 -- # val= 00:07:36.127 07:16:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # IFS=: 00:07:36.127 ************************************ 00:07:36.127 END TEST accel_dualcast 00:07:36.127 ************************************ 00:07:36.127 07:16:58 -- accel/accel.sh@20 -- # read -r var val 00:07:36.127 07:16:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:36.127 07:16:58 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:36.127 07:16:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.127 00:07:36.128 real 0m3.054s 00:07:36.128 user 0m2.565s 00:07:36.128 sys 0m0.273s 00:07:36.128 07:16:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.128 07:16:58 -- common/autotest_common.sh@10 -- # set +x 00:07:36.128 07:16:58 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:36.128 07:16:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:36.128 07:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.128 07:16:58 -- common/autotest_common.sh@10 -- # set +x 00:07:36.128 ************************************ 00:07:36.128 START TEST accel_compare 00:07:36.128 ************************************ 00:07:36.128 07:16:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:36.128 
07:16:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.128 07:16:58 -- accel/accel.sh@17 -- # local accel_module 00:07:36.128 07:16:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:36.128 07:16:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:36.128 07:16:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.128 07:16:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.128 07:16:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.128 07:16:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.128 07:16:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.128 07:16:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.128 07:16:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.128 07:16:58 -- accel/accel.sh@42 -- # jq -r . 00:07:36.128 [2024-11-28 07:16:58.170345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.128 [2024-11-28 07:16:58.170749] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68692 ] 00:07:36.128 [2024-11-28 07:16:58.312594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.386 [2024-11-28 07:16:58.417600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.762 07:16:59 -- accel/accel.sh@18 -- # out=' 00:07:37.762 SPDK Configuration: 00:07:37.762 Core mask: 0x1 00:07:37.762 00:07:37.762 Accel Perf Configuration: 00:07:37.762 Workload Type: compare 00:07:37.762 Transfer size: 4096 bytes 00:07:37.762 Vector count 1 00:07:37.762 Module: software 00:07:37.762 Queue depth: 32 00:07:37.762 Allocate depth: 32 00:07:37.762 # threads/core: 1 00:07:37.762 Run time: 1 seconds 00:07:37.762 Verify: Yes 00:07:37.762 00:07:37.762 Running for 1 seconds... 00:07:37.762 00:07:37.762 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.762 ------------------------------------------------------------------------------------ 00:07:37.762 0,0 434944/s 1699 MiB/s 0 0 00:07:37.762 ==================================================================================== 00:07:37.762 Total 434944/s 1699 MiB/s 0 0' 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:37.762 07:16:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.762 07:16:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.762 07:16:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.762 07:16:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.762 07:16:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.762 07:16:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.762 07:16:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.762 07:16:59 -- accel/accel.sh@42 -- # jq -r . 00:07:37.762 [2024-11-28 07:16:59.678828] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
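The compare workload only checks two buffers for equality rather than producing output data, so the pass condition is the Miscompares column above staying at 0 while verify (-y) is enabled. A standalone sketch with the flags taken from the log:

  # compare: read-only equality check of two 4096 B buffers
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y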
00:07:37.762 [2024-11-28 07:16:59.678949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68711 ] 00:07:37.762 [2024-11-28 07:16:59.813419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.762 [2024-11-28 07:16:59.914139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val= 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val= 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val=0x1 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val= 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val= 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val=compare 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val= 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val=software 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val=32 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val=32 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val=1 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val=Yes 00:07:37.762 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.762 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.762 07:16:59 -- accel/accel.sh@21 -- # val= 00:07:37.763 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.763 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.763 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:37.763 07:16:59 -- accel/accel.sh@21 -- # val= 00:07:37.763 07:16:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.763 07:16:59 -- accel/accel.sh@20 -- # IFS=: 00:07:37.763 07:16:59 -- accel/accel.sh@20 -- # read -r var val 00:07:39.139 07:17:01 -- accel/accel.sh@21 -- # val= 00:07:39.139 07:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # IFS=: 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # read -r var val 00:07:39.139 07:17:01 -- accel/accel.sh@21 -- # val= 00:07:39.139 07:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # IFS=: 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # read -r var val 00:07:39.139 07:17:01 -- accel/accel.sh@21 -- # val= 00:07:39.139 07:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # IFS=: 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # read -r var val 00:07:39.139 07:17:01 -- accel/accel.sh@21 -- # val= 00:07:39.139 07:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # IFS=: 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # read -r var val 00:07:39.139 07:17:01 -- accel/accel.sh@21 -- # val= 00:07:39.139 07:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # IFS=: 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # read -r var val 00:07:39.139 07:17:01 -- accel/accel.sh@21 -- # val= 00:07:39.139 07:17:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # IFS=: 00:07:39.139 07:17:01 -- accel/accel.sh@20 -- # read -r var val 00:07:39.139 07:17:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.139 ************************************ 00:07:39.139 END TEST accel_compare 00:07:39.139 ************************************ 00:07:39.139 07:17:01 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:39.139 07:17:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.139 00:07:39.139 real 0m3.015s 00:07:39.139 user 0m2.542s 00:07:39.139 sys 0m0.261s 00:07:39.139 07:17:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:39.139 07:17:01 -- common/autotest_common.sh@10 -- # set +x 00:07:39.139 07:17:01 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:39.139 07:17:01 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:39.139 07:17:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.139 07:17:01 -- common/autotest_common.sh@10 -- # set +x 00:07:39.139 ************************************ 00:07:39.139 START TEST accel_xor 00:07:39.139 ************************************ 00:07:39.139 07:17:01 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:39.139 07:17:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.139 07:17:01 -- accel/accel.sh@17 -- # local accel_module 00:07:39.139 
07:17:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:39.139 07:17:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:39.139 07:17:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.139 07:17:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.139 07:17:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.139 07:17:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.139 07:17:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.139 07:17:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.139 07:17:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.139 07:17:01 -- accel/accel.sh@42 -- # jq -r . 00:07:39.139 [2024-11-28 07:17:01.231473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:39.139 [2024-11-28 07:17:01.231631] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68746 ] 00:07:39.139 [2024-11-28 07:17:01.373932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.398 [2024-11-28 07:17:01.480558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.773 07:17:02 -- accel/accel.sh@18 -- # out=' 00:07:40.773 SPDK Configuration: 00:07:40.773 Core mask: 0x1 00:07:40.773 00:07:40.773 Accel Perf Configuration: 00:07:40.773 Workload Type: xor 00:07:40.773 Source buffers: 2 00:07:40.773 Transfer size: 4096 bytes 00:07:40.773 Vector count 1 00:07:40.773 Module: software 00:07:40.773 Queue depth: 32 00:07:40.773 Allocate depth: 32 00:07:40.773 # threads/core: 1 00:07:40.773 Run time: 1 seconds 00:07:40.773 Verify: Yes 00:07:40.773 00:07:40.773 Running for 1 seconds... 00:07:40.773 00:07:40.773 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:40.773 ------------------------------------------------------------------------------------ 00:07:40.773 0,0 221024/s 863 MiB/s 0 0 00:07:40.773 ==================================================================================== 00:07:40.773 Total 221024/s 863 MiB/s 0 0' 00:07:40.773 07:17:02 -- accel/accel.sh@20 -- # IFS=: 00:07:40.773 07:17:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:40.773 07:17:02 -- accel/accel.sh@20 -- # read -r var val 00:07:40.773 07:17:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:40.773 07:17:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.773 07:17:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.773 07:17:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.773 07:17:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.773 07:17:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.773 07:17:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.773 07:17:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.773 07:17:02 -- accel/accel.sh@42 -- # jq -r . 00:07:40.773 [2024-11-28 07:17:02.756819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
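The xor run above uses the default of two source buffers ("Source buffers: 2" in the configuration) XORed into one 4096-byte destination. Standalone sketch under the same assumptions:

  # xor with the default two source buffers
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y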
00:07:40.773 [2024-11-28 07:17:02.756931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68765 ] 00:07:40.773 [2024-11-28 07:17:02.890352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.773 [2024-11-28 07:17:02.995579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val= 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val= 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val=0x1 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val= 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val= 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val=xor 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val=2 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val= 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val=software 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val=32 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val=32 00:07:41.031 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.031 07:17:03 -- accel/accel.sh@21 -- # val=1 00:07:41.031 07:17:03 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:41.031 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.032 07:17:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.032 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.032 07:17:03 -- accel/accel.sh@21 -- # val=Yes 00:07:41.032 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.032 07:17:03 -- accel/accel.sh@21 -- # val= 00:07:41.032 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:41.032 07:17:03 -- accel/accel.sh@21 -- # val= 00:07:41.032 07:17:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # IFS=: 00:07:41.032 07:17:03 -- accel/accel.sh@20 -- # read -r var val 00:07:42.406 07:17:04 -- accel/accel.sh@21 -- # val= 00:07:42.406 07:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # IFS=: 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # read -r var val 00:07:42.406 07:17:04 -- accel/accel.sh@21 -- # val= 00:07:42.406 07:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # IFS=: 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # read -r var val 00:07:42.406 07:17:04 -- accel/accel.sh@21 -- # val= 00:07:42.406 07:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # IFS=: 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # read -r var val 00:07:42.406 07:17:04 -- accel/accel.sh@21 -- # val= 00:07:42.406 07:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # IFS=: 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # read -r var val 00:07:42.406 07:17:04 -- accel/accel.sh@21 -- # val= 00:07:42.406 07:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # IFS=: 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # read -r var val 00:07:42.406 07:17:04 -- accel/accel.sh@21 -- # val= 00:07:42.406 07:17:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # IFS=: 00:07:42.406 07:17:04 -- accel/accel.sh@20 -- # read -r var val 00:07:42.406 07:17:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.406 07:17:04 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:42.406 07:17:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.406 00:07:42.406 real 0m3.047s 00:07:42.406 user 0m2.583s 00:07:42.406 sys 0m0.254s 00:07:42.406 07:17:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.406 07:17:04 -- common/autotest_common.sh@10 -- # set +x 00:07:42.406 ************************************ 00:07:42.406 END TEST accel_xor 00:07:42.406 ************************************ 00:07:42.406 07:17:04 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:42.406 07:17:04 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:42.406 07:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.406 07:17:04 -- common/autotest_common.sh@10 -- # set +x 00:07:42.406 ************************************ 00:07:42.406 START TEST accel_xor 00:07:42.406 ************************************ 00:07:42.406 
07:17:04 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:42.406 07:17:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.406 07:17:04 -- accel/accel.sh@17 -- # local accel_module 00:07:42.406 07:17:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:42.406 07:17:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:42.406 07:17:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.406 07:17:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.406 07:17:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.406 07:17:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.406 07:17:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.406 07:17:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.406 07:17:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.406 07:17:04 -- accel/accel.sh@42 -- # jq -r . 00:07:42.406 [2024-11-28 07:17:04.333430] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:42.406 [2024-11-28 07:17:04.333592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68800 ] 00:07:42.406 [2024-11-28 07:17:04.477566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.406 [2024-11-28 07:17:04.585151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.777 07:17:05 -- accel/accel.sh@18 -- # out=' 00:07:43.777 SPDK Configuration: 00:07:43.777 Core mask: 0x1 00:07:43.777 00:07:43.777 Accel Perf Configuration: 00:07:43.777 Workload Type: xor 00:07:43.777 Source buffers: 3 00:07:43.777 Transfer size: 4096 bytes 00:07:43.777 Vector count 1 00:07:43.777 Module: software 00:07:43.777 Queue depth: 32 00:07:43.777 Allocate depth: 32 00:07:43.777 # threads/core: 1 00:07:43.777 Run time: 1 seconds 00:07:43.777 Verify: Yes 00:07:43.777 00:07:43.777 Running for 1 seconds... 00:07:43.777 00:07:43.777 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:43.777 ------------------------------------------------------------------------------------ 00:07:43.777 0,0 209216/s 817 MiB/s 0 0 00:07:43.777 ==================================================================================== 00:07:43.778 Total 209216/s 817 MiB/s 0 0' 00:07:43.778 07:17:05 -- accel/accel.sh@20 -- # IFS=: 00:07:43.778 07:17:05 -- accel/accel.sh@20 -- # read -r var val 00:07:43.778 07:17:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:43.778 07:17:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.778 07:17:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:43.778 07:17:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.778 07:17:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.778 07:17:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.778 07:17:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.778 07:17:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.778 07:17:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.778 07:17:05 -- accel/accel.sh@42 -- # jq -r . 00:07:43.778 [2024-11-28 07:17:05.867024] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:43.778 [2024-11-28 07:17:05.867513] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68819 ] 00:07:43.778 [2024-11-28 07:17:06.004033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.035 [2024-11-28 07:17:06.105612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.035 07:17:06 -- accel/accel.sh@21 -- # val= 00:07:44.035 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.035 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.035 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.035 07:17:06 -- accel/accel.sh@21 -- # val= 00:07:44.035 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.035 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.035 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.035 07:17:06 -- accel/accel.sh@21 -- # val=0x1 00:07:44.035 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.035 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.035 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.035 07:17:06 -- accel/accel.sh@21 -- # val= 00:07:44.035 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.035 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.035 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.035 07:17:06 -- accel/accel.sh@21 -- # val= 00:07:44.035 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val=xor 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val=3 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val= 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val=software 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val=32 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val=32 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val=1 00:07:44.036 07:17:06 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val=Yes 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val= 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:44.036 07:17:06 -- accel/accel.sh@21 -- # val= 00:07:44.036 07:17:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # IFS=: 00:07:44.036 07:17:06 -- accel/accel.sh@20 -- # read -r var val 00:07:45.411 07:17:07 -- accel/accel.sh@21 -- # val= 00:07:45.411 07:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # IFS=: 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # read -r var val 00:07:45.411 07:17:07 -- accel/accel.sh@21 -- # val= 00:07:45.411 07:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # IFS=: 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # read -r var val 00:07:45.411 07:17:07 -- accel/accel.sh@21 -- # val= 00:07:45.411 07:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # IFS=: 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # read -r var val 00:07:45.411 07:17:07 -- accel/accel.sh@21 -- # val= 00:07:45.411 07:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # IFS=: 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # read -r var val 00:07:45.411 07:17:07 -- accel/accel.sh@21 -- # val= 00:07:45.411 07:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # IFS=: 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # read -r var val 00:07:45.411 07:17:07 -- accel/accel.sh@21 -- # val= 00:07:45.411 07:17:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # IFS=: 00:07:45.411 07:17:07 -- accel/accel.sh@20 -- # read -r var val 00:07:45.411 07:17:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.411 07:17:07 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:45.411 07:17:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.411 00:07:45.411 real 0m3.054s 00:07:45.411 user 0m2.570s 00:07:45.411 sys 0m0.275s 00:07:45.411 07:17:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.411 07:17:07 -- common/autotest_common.sh@10 -- # set +x 00:07:45.411 ************************************ 00:07:45.411 END TEST accel_xor 00:07:45.411 ************************************ 00:07:45.411 07:17:07 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:45.411 07:17:07 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:45.411 07:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.411 07:17:07 -- common/autotest_common.sh@10 -- # set +x 00:07:45.411 ************************************ 00:07:45.411 START TEST accel_dif_verify 00:07:45.411 ************************************ 
00:07:45.411 07:17:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:45.411 07:17:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.411 07:17:07 -- accel/accel.sh@17 -- # local accel_module 00:07:45.411 07:17:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:45.411 07:17:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:45.411 07:17:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.411 07:17:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.411 07:17:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.411 07:17:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.411 07:17:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.411 07:17:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.411 07:17:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.411 07:17:07 -- accel/accel.sh@42 -- # jq -r . 00:07:45.411 [2024-11-28 07:17:07.438013] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.411 [2024-11-28 07:17:07.438173] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68854 ] 00:07:45.411 [2024-11-28 07:17:07.577605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.411 [2024-11-28 07:17:07.683117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.786 07:17:08 -- accel/accel.sh@18 -- # out=' 00:07:46.786 SPDK Configuration: 00:07:46.786 Core mask: 0x1 00:07:46.786 00:07:46.786 Accel Perf Configuration: 00:07:46.786 Workload Type: dif_verify 00:07:46.786 Vector size: 4096 bytes 00:07:46.786 Transfer size: 4096 bytes 00:07:46.786 Block size: 512 bytes 00:07:46.786 Metadata size: 8 bytes 00:07:46.786 Vector count 1 00:07:46.786 Module: software 00:07:46.786 Queue depth: 32 00:07:46.786 Allocate depth: 32 00:07:46.786 # threads/core: 1 00:07:46.786 Run time: 1 seconds 00:07:46.786 Verify: No 00:07:46.786 00:07:46.786 Running for 1 seconds... 00:07:46.786 00:07:46.786 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.786 ------------------------------------------------------------------------------------ 00:07:46.786 0,0 98016/s 382 MiB/s 0 0 00:07:46.786 ==================================================================================== 00:07:46.786 Total 98016/s 382 MiB/s 0 0' 00:07:46.786 07:17:08 -- accel/accel.sh@20 -- # IFS=: 00:07:46.786 07:17:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:46.786 07:17:08 -- accel/accel.sh@20 -- # read -r var val 00:07:46.786 07:17:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:46.786 07:17:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.786 07:17:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.786 07:17:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.786 07:17:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.786 07:17:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.786 07:17:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.786 07:17:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.786 07:17:08 -- accel/accel.sh@42 -- # jq -r . 00:07:46.786 [2024-11-28 07:17:08.958516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:46.786 [2024-11-28 07:17:08.958667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68873 ] 00:07:47.044 [2024-11-28 07:17:09.101957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.044 [2024-11-28 07:17:09.206114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.044 07:17:09 -- accel/accel.sh@21 -- # val= 00:07:47.044 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.044 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.044 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.044 07:17:09 -- accel/accel.sh@21 -- # val= 00:07:47.044 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.044 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.044 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.044 07:17:09 -- accel/accel.sh@21 -- # val=0x1 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val= 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val= 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val=dif_verify 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val= 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val=software 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 
-- # val=32 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val=32 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val=1 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val=No 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val= 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:47.045 07:17:09 -- accel/accel.sh@21 -- # val= 00:07:47.045 07:17:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # IFS=: 00:07:47.045 07:17:09 -- accel/accel.sh@20 -- # read -r var val 00:07:48.420 07:17:10 -- accel/accel.sh@21 -- # val= 00:07:48.420 07:17:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # IFS=: 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # read -r var val 00:07:48.420 07:17:10 -- accel/accel.sh@21 -- # val= 00:07:48.420 07:17:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # IFS=: 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # read -r var val 00:07:48.420 07:17:10 -- accel/accel.sh@21 -- # val= 00:07:48.420 07:17:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # IFS=: 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # read -r var val 00:07:48.420 07:17:10 -- accel/accel.sh@21 -- # val= 00:07:48.420 07:17:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # IFS=: 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # read -r var val 00:07:48.420 07:17:10 -- accel/accel.sh@21 -- # val= 00:07:48.420 07:17:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # IFS=: 00:07:48.420 ************************************ 00:07:48.420 END TEST accel_dif_verify 00:07:48.420 ************************************ 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # read -r var val 00:07:48.420 07:17:10 -- accel/accel.sh@21 -- # val= 00:07:48.420 07:17:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # IFS=: 00:07:48.420 07:17:10 -- accel/accel.sh@20 -- # read -r var val 00:07:48.420 07:17:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.420 07:17:10 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:48.420 07:17:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.420 00:07:48.420 real 0m3.039s 00:07:48.420 user 0m2.555s 00:07:48.420 sys 0m0.270s 00:07:48.420 07:17:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.420 
07:17:10 -- common/autotest_common.sh@10 -- # set +x 00:07:48.420 07:17:10 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:48.420 07:17:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:48.420 07:17:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.420 07:17:10 -- common/autotest_common.sh@10 -- # set +x 00:07:48.420 ************************************ 00:07:48.420 START TEST accel_dif_generate 00:07:48.420 ************************************ 00:07:48.420 07:17:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:48.420 07:17:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.420 07:17:10 -- accel/accel.sh@17 -- # local accel_module 00:07:48.420 07:17:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:48.420 07:17:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:48.420 07:17:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.420 07:17:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.420 07:17:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.420 07:17:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.420 07:17:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.420 07:17:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.420 07:17:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.420 07:17:10 -- accel/accel.sh@42 -- # jq -r . 00:07:48.420 [2024-11-28 07:17:10.536891] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.420 [2024-11-28 07:17:10.537031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68908 ] 00:07:48.420 [2024-11-28 07:17:10.677056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.678 [2024-11-28 07:17:10.782664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.049 07:17:12 -- accel/accel.sh@18 -- # out=' 00:07:50.049 SPDK Configuration: 00:07:50.049 Core mask: 0x1 00:07:50.049 00:07:50.049 Accel Perf Configuration: 00:07:50.049 Workload Type: dif_generate 00:07:50.049 Vector size: 4096 bytes 00:07:50.049 Transfer size: 4096 bytes 00:07:50.049 Block size: 512 bytes 00:07:50.049 Metadata size: 8 bytes 00:07:50.049 Vector count 1 00:07:50.049 Module: software 00:07:50.049 Queue depth: 32 00:07:50.049 Allocate depth: 32 00:07:50.049 # threads/core: 1 00:07:50.049 Run time: 1 seconds 00:07:50.049 Verify: No 00:07:50.049 00:07:50.049 Running for 1 seconds... 
00:07:50.049 00:07:50.049 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.049 ------------------------------------------------------------------------------------ 00:07:50.050 0,0 119360/s 466 MiB/s 0 0 00:07:50.050 ==================================================================================== 00:07:50.050 Total 119360/s 466 MiB/s 0 0' 00:07:50.050 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.050 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.050 07:17:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:50.050 07:17:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.050 07:17:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:50.050 07:17:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.050 07:17:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.050 07:17:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.050 07:17:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.050 07:17:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.050 07:17:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.050 07:17:12 -- accel/accel.sh@42 -- # jq -r . 00:07:50.050 [2024-11-28 07:17:12.055935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.050 [2024-11-28 07:17:12.056065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68927 ] 00:07:50.050 [2024-11-28 07:17:12.199893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.050 [2024-11-28 07:17:12.312297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.306 07:17:12 -- accel/accel.sh@21 -- # val= 00:07:50.306 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.306 07:17:12 -- accel/accel.sh@21 -- # val= 00:07:50.306 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.306 07:17:12 -- accel/accel.sh@21 -- # val=0x1 00:07:50.306 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.306 07:17:12 -- accel/accel.sh@21 -- # val= 00:07:50.306 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.306 07:17:12 -- accel/accel.sh@21 -- # val= 00:07:50.306 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.306 07:17:12 -- accel/accel.sh@21 -- # val=dif_generate 00:07:50.306 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.306 07:17:12 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.306 07:17:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.306 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.306 07:17:12 -- accel/accel.sh@20 -- # read -r var val
00:07:50.306 07:17:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val= 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val=software 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val=32 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val=32 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val=1 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val=No 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val= 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:50.307 07:17:12 -- accel/accel.sh@21 -- # val= 00:07:50.307 07:17:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # IFS=: 00:07:50.307 07:17:12 -- accel/accel.sh@20 -- # read -r var val 00:07:51.673 07:17:13 -- accel/accel.sh@21 -- # val= 00:07:51.673 07:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:51.673 07:17:13 -- accel/accel.sh@21 -- # val= 00:07:51.673 07:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:51.673 07:17:13 -- accel/accel.sh@21 -- # val= 00:07:51.673 07:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.673 07:17:13 -- 
accel/accel.sh@20 -- # IFS=: 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:51.673 07:17:13 -- accel/accel.sh@21 -- # val= 00:07:51.673 07:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:51.673 07:17:13 -- accel/accel.sh@21 -- # val= 00:07:51.673 07:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:51.673 07:17:13 -- accel/accel.sh@21 -- # val= 00:07:51.673 07:17:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # IFS=: 00:07:51.673 07:17:13 -- accel/accel.sh@20 -- # read -r var val 00:07:51.673 07:17:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.673 07:17:13 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:51.673 07:17:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.673 ************************************ 00:07:51.673 END TEST accel_dif_generate 00:07:51.673 ************************************ 00:07:51.673 00:07:51.674 real 0m3.039s 00:07:51.674 user 0m2.560s 00:07:51.674 sys 0m0.266s 00:07:51.674 07:17:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.674 07:17:13 -- common/autotest_common.sh@10 -- # set +x 00:07:51.674 07:17:13 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:51.674 07:17:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:51.674 07:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.674 07:17:13 -- common/autotest_common.sh@10 -- # set +x 00:07:51.674 ************************************ 00:07:51.674 START TEST accel_dif_generate_copy 00:07:51.674 ************************************ 00:07:51.674 07:17:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:51.674 07:17:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.674 07:17:13 -- accel/accel.sh@17 -- # local accel_module 00:07:51.674 07:17:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:51.674 07:17:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:51.674 07:17:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.674 07:17:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.674 07:17:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.674 07:17:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.674 07:17:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.674 07:17:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.674 07:17:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.674 07:17:13 -- accel/accel.sh@42 -- # jq -r . 00:07:51.674 [2024-11-28 07:17:13.632870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:51.674 [2024-11-28 07:17:13.632993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68962 ] 00:07:51.674 [2024-11-28 07:17:13.774262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.674 [2024-11-28 07:17:13.884511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.041 07:17:15 -- accel/accel.sh@18 -- # out=' 00:07:53.041 SPDK Configuration: 00:07:53.041 Core mask: 0x1 00:07:53.041 00:07:53.041 Accel Perf Configuration: 00:07:53.041 Workload Type: dif_generate_copy 00:07:53.041 Vector size: 4096 bytes 00:07:53.041 Transfer size: 4096 bytes 00:07:53.041 Vector count 1 00:07:53.041 Module: software 00:07:53.041 Queue depth: 32 00:07:53.041 Allocate depth: 32 00:07:53.041 # threads/core: 1 00:07:53.041 Run time: 1 seconds 00:07:53.041 Verify: No 00:07:53.041 00:07:53.041 Running for 1 seconds... 00:07:53.041 00:07:53.041 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.041 ------------------------------------------------------------------------------------ 00:07:53.041 0,0 90336/s 352 MiB/s 0 0 00:07:53.041 ==================================================================================== 00:07:53.041 Total 90336/s 352 MiB/s 0 0' 00:07:53.041 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.041 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.041 07:17:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:53.041 07:17:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:53.041 07:17:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.041 07:17:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.041 07:17:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.041 07:17:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.041 07:17:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.041 07:17:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.041 07:17:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.041 07:17:15 -- accel/accel.sh@42 -- # jq -r . 00:07:53.041 [2024-11-28 07:17:15.172476] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:53.041 [2024-11-28 07:17:15.172620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68980 ] 00:07:53.041 [2024-11-28 07:17:15.313099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.297 [2024-11-28 07:17:15.426562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val= 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val= 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val=0x1 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val= 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val= 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val= 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val=software 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val=32 00:07:53.297 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.297 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.297 07:17:15 -- accel/accel.sh@21 -- # val=32 00:07:53.298 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.298 07:17:15 -- accel/accel.sh@21 
-- # val=1 00:07:53.298 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.298 07:17:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.298 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.298 07:17:15 -- accel/accel.sh@21 -- # val=No 00:07:53.298 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.298 07:17:15 -- accel/accel.sh@21 -- # val= 00:07:53.298 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:53.298 07:17:15 -- accel/accel.sh@21 -- # val= 00:07:53.298 07:17:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # IFS=: 00:07:53.298 07:17:15 -- accel/accel.sh@20 -- # read -r var val 00:07:54.671 07:17:16 -- accel/accel.sh@21 -- # val= 00:07:54.671 07:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:54.671 07:17:16 -- accel/accel.sh@21 -- # val= 00:07:54.671 07:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:54.671 07:17:16 -- accel/accel.sh@21 -- # val= 00:07:54.671 07:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:54.671 07:17:16 -- accel/accel.sh@21 -- # val= 00:07:54.671 07:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:54.671 07:17:16 -- accel/accel.sh@21 -- # val= 00:07:54.671 07:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:54.671 07:17:16 -- accel/accel.sh@21 -- # val= 00:07:54.671 07:17:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # IFS=: 00:07:54.671 07:17:16 -- accel/accel.sh@20 -- # read -r var val 00:07:54.671 07:17:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.671 07:17:16 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:54.671 07:17:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.671 00:07:54.671 real 0m3.108s 00:07:54.671 user 0m2.619s 00:07:54.671 sys 0m0.274s 00:07:54.671 07:17:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.671 07:17:16 -- common/autotest_common.sh@10 -- # set +x 00:07:54.671 ************************************ 00:07:54.671 END TEST accel_dif_generate_copy 00:07:54.671 ************************************ 00:07:54.671 07:17:16 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:54.671 07:17:16 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.671 07:17:16 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:54.671 07:17:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.671 07:17:16 -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.671 ************************************ 00:07:54.671 START TEST accel_comp 00:07:54.671 ************************************ 00:07:54.671 07:17:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.671 07:17:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.671 07:17:16 -- accel/accel.sh@17 -- # local accel_module 00:07:54.671 07:17:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.671 07:17:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:54.671 07:17:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.671 07:17:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.671 07:17:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.671 07:17:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.671 07:17:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.671 07:17:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.671 07:17:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.671 07:17:16 -- accel/accel.sh@42 -- # jq -r . 00:07:54.671 [2024-11-28 07:17:16.803568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:54.671 [2024-11-28 07:17:16.803710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69016 ] 00:07:54.671 [2024-11-28 07:17:16.943813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.930 [2024-11-28 07:17:17.047595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.304 07:17:18 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:56.304 00:07:56.304 SPDK Configuration: 00:07:56.304 Core mask: 0x1 00:07:56.304 00:07:56.304 Accel Perf Configuration: 00:07:56.304 Workload Type: compress 00:07:56.304 Transfer size: 4096 bytes 00:07:56.304 Vector count 1 00:07:56.304 Module: software 00:07:56.304 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.304 Queue depth: 32 00:07:56.304 Allocate depth: 32 00:07:56.304 # threads/core: 1 00:07:56.304 Run time: 1 seconds 00:07:56.304 Verify: No 00:07:56.304 00:07:56.304 Running for 1 seconds... 
00:07:56.304 00:07:56.304 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:56.304 ------------------------------------------------------------------------------------ 00:07:56.304 0,0 46752/s 182 MiB/s 0 0 00:07:56.304 ==================================================================================== 00:07:56.304 Total 46752/s 182 MiB/s 0 0' 00:07:56.304 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.304 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.304 07:17:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.304 07:17:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.304 07:17:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.304 07:17:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.304 07:17:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.304 07:17:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.304 07:17:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.304 07:17:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.304 07:17:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.304 07:17:18 -- accel/accel.sh@42 -- # jq -r . 00:07:56.304 [2024-11-28 07:17:18.357616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:56.304 [2024-11-28 07:17:18.357783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69034 ] 00:07:56.304 [2024-11-28 07:17:18.502489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.563 [2024-11-28 07:17:18.617917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val= 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val= 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val= 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val=0x1 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val= 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val= 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val=compress 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=:
00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val= 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val=software 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val=32 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val=32 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val=1 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val=No 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val= 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:56.563 07:17:18 -- accel/accel.sh@21 -- # val= 00:07:56.563 07:17:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # IFS=: 00:07:56.563 07:17:18 -- accel/accel.sh@20 -- # read -r var val 00:07:57.954 07:17:19 -- accel/accel.sh@21 -- # val= 00:07:57.954 07:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:57.954 07:17:19 -- accel/accel.sh@21 -- # val= 00:07:57.954 07:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:57.954 07:17:19 -- accel/accel.sh@21 -- # val= 00:07:57.954 07:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:57.954 07:17:19 -- accel/accel.sh@21 -- # val= 
00:07:57.954 07:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:57.954 07:17:19 -- accel/accel.sh@21 -- # val= 00:07:57.954 07:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:57.954 07:17:19 -- accel/accel.sh@21 -- # val= 00:07:57.954 07:17:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # IFS=: 00:07:57.954 07:17:19 -- accel/accel.sh@20 -- # read -r var val 00:07:57.954 07:17:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:57.954 07:17:19 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:57.954 07:17:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.954 00:07:57.954 real 0m3.106s 00:07:57.954 user 0m2.592s 00:07:57.954 sys 0m0.298s 00:07:57.954 07:17:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:57.954 07:17:19 -- common/autotest_common.sh@10 -- # set +x 00:07:57.954 ************************************ 00:07:57.954 END TEST accel_comp 00:07:57.954 ************************************ 00:07:57.954 07:17:19 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.954 07:17:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:57.954 07:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.954 07:17:19 -- common/autotest_common.sh@10 -- # set +x 00:07:57.954 ************************************ 00:07:57.954 START TEST accel_decomp 00:07:57.954 ************************************ 00:07:57.954 07:17:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.954 07:17:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.954 07:17:19 -- accel/accel.sh@17 -- # local accel_module 00:07:57.954 07:17:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.954 07:17:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:57.954 07:17:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.954 07:17:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.954 07:17:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.954 07:17:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.954 07:17:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.954 07:17:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.954 07:17:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.954 07:17:19 -- accel/accel.sh@42 -- # jq -r . 00:07:57.954 [2024-11-28 07:17:19.961660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:57.954 [2024-11-28 07:17:19.961768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69072 ] 00:07:57.954 [2024-11-28 07:17:20.096522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.954 [2024-11-28 07:17:20.194390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.329 07:17:21 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:59.329 00:07:59.329 SPDK Configuration: 00:07:59.329 Core mask: 0x1 00:07:59.329 00:07:59.329 Accel Perf Configuration: 00:07:59.329 Workload Type: decompress 00:07:59.329 Transfer size: 4096 bytes 00:07:59.329 Vector count 1 00:07:59.329 Module: software 00:07:59.329 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:59.329 Queue depth: 32 00:07:59.329 Allocate depth: 32 00:07:59.329 # threads/core: 1 00:07:59.329 Run time: 1 seconds 00:07:59.329 Verify: Yes 00:07:59.329 00:07:59.329 Running for 1 seconds... 00:07:59.329 00:07:59.329 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:59.329 ------------------------------------------------------------------------------------ 00:07:59.329 0,0 66656/s 260 MiB/s 0 0 00:07:59.329 ==================================================================================== 00:07:59.330 Total 66656/s 260 MiB/s 0 0' 00:07:59.330 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.330 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.330 07:17:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:59.330 07:17:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.330 07:17:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:59.330 07:17:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.330 07:17:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.330 07:17:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.330 07:17:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.330 07:17:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.330 07:17:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.330 07:17:21 -- accel/accel.sh@42 -- # jq -r . 00:07:59.330 [2024-11-28 07:17:21.475996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:59.330 [2024-11-28 07:17:21.476114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69086 ] 00:07:59.588 [2024-11-28 07:17:21.615467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.588 [2024-11-28 07:17:21.724728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val= 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val= 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val= 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val=0x1 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val= 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val= 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val=decompress 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val= 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val=software 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val=32 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- 
accel/accel.sh@21 -- # val=32 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val=1 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val=Yes 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val= 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:07:59.588 07:17:21 -- accel/accel.sh@21 -- # val= 00:07:59.588 07:17:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # IFS=: 00:07:59.588 07:17:21 -- accel/accel.sh@20 -- # read -r var val 00:08:00.963 07:17:22 -- accel/accel.sh@21 -- # val= 00:08:00.963 07:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # IFS=: 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # read -r var val 00:08:00.963 07:17:22 -- accel/accel.sh@21 -- # val= 00:08:00.963 07:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # IFS=: 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # read -r var val 00:08:00.963 07:17:22 -- accel/accel.sh@21 -- # val= 00:08:00.963 07:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # IFS=: 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # read -r var val 00:08:00.963 07:17:22 -- accel/accel.sh@21 -- # val= 00:08:00.963 07:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # IFS=: 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # read -r var val 00:08:00.963 07:17:22 -- accel/accel.sh@21 -- # val= 00:08:00.963 07:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # IFS=: 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # read -r var val 00:08:00.963 07:17:22 -- accel/accel.sh@21 -- # val= 00:08:00.963 07:17:22 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # IFS=: 00:08:00.963 07:17:22 -- accel/accel.sh@20 -- # read -r var val 00:08:00.963 07:17:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:00.963 07:17:22 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:00.963 ************************************ 00:08:00.963 END TEST accel_decomp 00:08:00.963 ************************************ 00:08:00.963 07:17:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.963 00:08:00.963 real 0m3.051s 00:08:00.963 user 0m2.580s 00:08:00.963 sys 0m0.257s 00:08:00.963 07:17:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:00.963 07:17:22 -- common/autotest_common.sh@10 -- # set +x 00:08:00.963 07:17:23 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
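The repeated `IFS=:`, `read -r var val`, and `case "$var" in` entries in the xtrace come from the harness reading the accel_perf configuration output back field by field and recording the module and workload it finds; the expanded checks `[[ -n software ]]`, `[[ -n decompress ]]`, and `[[ software == \s\o\f\t\w\a\r\e ]]` seen above are the end result. Below is a minimal sketch of that pattern, not the actual accel.sh source: the helper name and the exact match patterns are assumptions, and only the IFS/read/case statements and the final checks are taken from the trace.

    # Sketch of the traced parse pattern (hypothetical helper, not accel.sh itself).
    parse_accel_perf_config() {
        local accel_module='' accel_opc=''
        while IFS=: read -r var val; do              # IFS=: / read -r var val, as in the trace
            case "$var" in                           # case "$var" in, as in the trace
                *Module*) accel_module=${val//[[:space:]]/} ;;         # trace shows accel_module=software
                *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;   # trace shows accel_opc=decompress
            esac
        done
        # mirrors the expanded checks seen at accel.sh@28 in the trace
        [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]
    }
    # usage sketch: accel_perf ... -w decompress ... | parse_accel_perf_config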
00:08:00.963 07:17:23 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:00.963 07:17:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.963 07:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:00.963 ************************************ 00:08:00.963 START TEST accel_decmop_full 00:08:00.963 ************************************ 00:08:00.963 07:17:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:00.963 07:17:23 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.963 07:17:23 -- accel/accel.sh@17 -- # local accel_module 00:08:00.963 07:17:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:00.963 07:17:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:00.963 07:17:23 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.963 07:17:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:00.963 07:17:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.963 07:17:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.963 07:17:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:00.963 07:17:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:00.963 07:17:23 -- accel/accel.sh@41 -- # local IFS=, 00:08:00.963 07:17:23 -- accel/accel.sh@42 -- # jq -r . 00:08:00.963 [2024-11-28 07:17:23.073556] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.963 [2024-11-28 07:17:23.073693] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69126 ] 00:08:00.963 [2024-11-28 07:17:23.213621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.221 [2024-11-28 07:17:23.313984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.605 07:17:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:02.605 00:08:02.605 SPDK Configuration: 00:08:02.605 Core mask: 0x1 00:08:02.605 00:08:02.605 Accel Perf Configuration: 00:08:02.605 Workload Type: decompress 00:08:02.605 Transfer size: 111250 bytes 00:08:02.605 Vector count 1 00:08:02.605 Module: software 00:08:02.605 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:02.605 Queue depth: 32 00:08:02.605 Allocate depth: 32 00:08:02.605 # threads/core: 1 00:08:02.605 Run time: 1 seconds 00:08:02.605 Verify: Yes 00:08:02.605 00:08:02.605 Running for 1 seconds... 
00:08:02.605 00:08:02.605 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.605 ------------------------------------------------------------------------------------ 00:08:02.605 0,0 4256/s 175 MiB/s 0 0 00:08:02.605 ==================================================================================== 00:08:02.605 Total 4256/s 451 MiB/s 0 0' 00:08:02.605 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.605 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.605 07:17:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:02.605 07:17:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:02.605 07:17:24 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.605 07:17:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.605 07:17:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.605 07:17:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.605 07:17:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.605 07:17:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.605 07:17:24 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.605 07:17:24 -- accel/accel.sh@42 -- # jq -r . 00:08:02.605 [2024-11-28 07:17:24.632502] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:02.605 [2024-11-28 07:17:24.632626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69140 ] 00:08:02.605 [2024-11-28 07:17:24.773397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.864 [2024-11-28 07:17:24.881410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val= 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val= 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val= 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val=0x1 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val= 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val= 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val=decompress 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:02.864 07:17:24 -- accel/accel.sh@20 
-- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val= 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val=software 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@23 -- # accel_module=software 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val=32 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val=32 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val=1 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val=Yes 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val= 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:02.864 07:17:24 -- accel/accel.sh@21 -- # val= 00:08:02.864 07:17:24 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.864 07:17:24 -- accel/accel.sh@20 -- # IFS=: 00:08:02.865 07:17:24 -- accel/accel.sh@20 -- # read -r var val 00:08:04.238 07:17:26 -- accel/accel.sh@21 -- # val= 00:08:04.238 07:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # IFS=: 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # read -r var val 00:08:04.238 07:17:26 -- accel/accel.sh@21 -- # val= 00:08:04.238 07:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # IFS=: 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # read -r var val 00:08:04.238 07:17:26 -- accel/accel.sh@21 -- # val= 00:08:04.238 07:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # IFS=: 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # read -r var val 00:08:04.238 07:17:26 -- accel/accel.sh@21 -- # 
val= 00:08:04.238 07:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # IFS=: 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # read -r var val 00:08:04.238 07:17:26 -- accel/accel.sh@21 -- # val= 00:08:04.238 07:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # IFS=: 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # read -r var val 00:08:04.238 07:17:26 -- accel/accel.sh@21 -- # val= 00:08:04.238 07:17:26 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # IFS=: 00:08:04.238 07:17:26 -- accel/accel.sh@20 -- # read -r var val 00:08:04.238 07:17:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.238 07:17:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:04.238 07:17:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.238 00:08:04.238 real 0m3.099s 00:08:04.238 user 0m2.616s 00:08:04.238 sys 0m0.269s 00:08:04.238 07:17:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.238 ************************************ 00:08:04.238 END TEST accel_decmop_full 00:08:04.238 ************************************ 00:08:04.238 07:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.238 07:17:26 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:04.238 07:17:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:04.238 07:17:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.238 07:17:26 -- common/autotest_common.sh@10 -- # set +x 00:08:04.238 ************************************ 00:08:04.238 START TEST accel_decomp_mcore 00:08:04.238 ************************************ 00:08:04.238 07:17:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:04.238 07:17:26 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.238 07:17:26 -- accel/accel.sh@17 -- # local accel_module 00:08:04.238 07:17:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:04.238 07:17:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:04.238 07:17:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.238 07:17:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.238 07:17:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.238 07:17:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.238 07:17:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.238 07:17:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.238 07:17:26 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.238 07:17:26 -- accel/accel.sh@42 -- # jq -r . 00:08:04.238 [2024-11-28 07:17:26.239467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
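The _mcore variant adds `-m 0xf` to the accel_perf invocation above; it is passed through as the `-c 0xf` core mask in the EAL parameters that follow, and four reactors (cores 0-3) are started. As a small illustration of why that mask selects four cores, this is plain shell, not part of the suite:

    # illustration only: count the bits set in core mask 0xf
    mask=0xf; n=0
    for ((i = 0; i < 32; i++)); do (( (mask >> i) & 1 )) && ((n++)); done
    echo "$n"   # -> 4, matching "Total cores available: 4" below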
00:08:04.238 [2024-11-28 07:17:26.239636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69180 ] 00:08:04.238 [2024-11-28 07:17:26.379851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.238 [2024-11-28 07:17:26.496923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.238 [2024-11-28 07:17:26.497093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.238 [2024-11-28 07:17:26.497264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.238 [2024-11-28 07:17:26.497278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.613 07:17:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:05.613 00:08:05.613 SPDK Configuration: 00:08:05.613 Core mask: 0xf 00:08:05.613 00:08:05.613 Accel Perf Configuration: 00:08:05.613 Workload Type: decompress 00:08:05.613 Transfer size: 4096 bytes 00:08:05.613 Vector count 1 00:08:05.613 Module: software 00:08:05.613 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:05.613 Queue depth: 32 00:08:05.613 Allocate depth: 32 00:08:05.613 # threads/core: 1 00:08:05.613 Run time: 1 seconds 00:08:05.613 Verify: Yes 00:08:05.613 00:08:05.613 Running for 1 seconds... 00:08:05.613 00:08:05.613 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:05.613 ------------------------------------------------------------------------------------ 00:08:05.613 0,0 46976/s 86 MiB/s 0 0 00:08:05.613 3,0 50592/s 93 MiB/s 0 0 00:08:05.613 2,0 50912/s 93 MiB/s 0 0 00:08:05.613 1,0 52480/s 96 MiB/s 0 0 00:08:05.613 ==================================================================================== 00:08:05.613 Total 200960/s 785 MiB/s 0 0' 00:08:05.613 07:17:27 -- accel/accel.sh@20 -- # IFS=: 00:08:05.613 07:17:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:05.613 07:17:27 -- accel/accel.sh@20 -- # read -r var val 00:08:05.613 07:17:27 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.613 07:17:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:05.613 07:17:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.613 07:17:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.614 07:17:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.614 07:17:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.614 07:17:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.614 07:17:27 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.614 07:17:27 -- accel/accel.sh@42 -- # jq -r . 00:08:05.614 [2024-11-28 07:17:27.800244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
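With four reactors running, the results table above gains one row per core, and the Total row is their straight sum. A quick consistency check with shell arithmetic (illustration only, values copied from the table):

    echo $(( 46976 + 50592 + 50912 + 52480 ))   # -> 200960 transfers/s, as in the Total row
    echo $(( 200960 * 4096 / 1024 / 1024 ))     # -> 785, matching "Total 200960/s 785 MiB/s"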
00:08:05.614 [2024-11-28 07:17:27.800777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69203 ] 00:08:05.872 [2024-11-28 07:17:27.941858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.872 [2024-11-28 07:17:28.056886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.872 [2024-11-28 07:17:28.057018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.872 [2024-11-28 07:17:28.057176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.872 [2024-11-28 07:17:28.057189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.872 07:17:28 -- accel/accel.sh@21 -- # val= 00:08:05.872 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.872 07:17:28 -- accel/accel.sh@21 -- # val= 00:08:05.872 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.872 07:17:28 -- accel/accel.sh@21 -- # val= 00:08:05.872 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.872 07:17:28 -- accel/accel.sh@21 -- # val=0xf 00:08:05.872 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.872 07:17:28 -- accel/accel.sh@21 -- # val= 00:08:05.872 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.872 07:17:28 -- accel/accel.sh@21 -- # val= 00:08:05.872 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.872 07:17:28 -- accel/accel.sh@21 -- # val=decompress 00:08:05.872 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.872 07:17:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:05.872 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.873 07:17:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:05.873 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.873 07:17:28 -- accel/accel.sh@21 -- # val= 00:08:05.873 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.873 07:17:28 -- accel/accel.sh@21 -- # val=software 00:08:05.873 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.873 07:17:28 -- accel/accel.sh@23 -- # accel_module=software 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.873 07:17:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:05.873 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # IFS=: 
00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.873 07:17:28 -- accel/accel.sh@21 -- # val=32 00:08:05.873 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.873 07:17:28 -- accel/accel.sh@21 -- # val=32 00:08:05.873 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.873 07:17:28 -- accel/accel.sh@21 -- # val=1 00:08:05.873 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:05.873 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:05.873 07:17:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:06.131 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.131 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:06.131 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:06.131 07:17:28 -- accel/accel.sh@21 -- # val=Yes 00:08:06.131 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.131 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:06.131 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:06.131 07:17:28 -- accel/accel.sh@21 -- # val= 00:08:06.131 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.131 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:06.131 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:06.131 07:17:28 -- accel/accel.sh@21 -- # val= 00:08:06.131 07:17:28 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.131 07:17:28 -- accel/accel.sh@20 -- # IFS=: 00:08:06.131 07:17:28 -- accel/accel.sh@20 -- # read -r var val 00:08:07.066 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.066 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.066 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.066 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.066 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.067 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.067 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.067 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.067 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.067 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.067 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.067 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.067 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.067 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.067 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.438 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.438 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.438 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.438 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.439 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.439 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.439 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.439 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.439 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.439 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.439 
************************************ 00:08:07.439 END TEST accel_decomp_mcore 00:08:07.439 ************************************ 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.439 07:17:29 -- accel/accel.sh@21 -- # val= 00:08:07.439 07:17:29 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # IFS=: 00:08:07.439 07:17:29 -- accel/accel.sh@20 -- # read -r var val 00:08:07.439 07:17:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:07.439 07:17:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:07.439 07:17:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.439 00:08:07.439 real 0m3.138s 00:08:07.439 user 0m9.640s 00:08:07.439 sys 0m0.314s 00:08:07.439 07:17:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:07.439 07:17:29 -- common/autotest_common.sh@10 -- # set +x 00:08:07.439 07:17:29 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:07.439 07:17:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:07.439 07:17:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.439 07:17:29 -- common/autotest_common.sh@10 -- # set +x 00:08:07.439 ************************************ 00:08:07.439 START TEST accel_decomp_full_mcore 00:08:07.439 ************************************ 00:08:07.439 07:17:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:07.439 07:17:29 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.439 07:17:29 -- accel/accel.sh@17 -- # local accel_module 00:08:07.439 07:17:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:07.439 07:17:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:07.439 07:17:29 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.439 07:17:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.439 07:17:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.439 07:17:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.439 07:17:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.439 07:17:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.439 07:17:29 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.439 07:17:29 -- accel/accel.sh@42 -- # jq -r . 00:08:07.439 [2024-11-28 07:17:29.431617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:07.439 [2024-11-28 07:17:29.431734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69240 ] 00:08:07.439 [2024-11-28 07:17:29.567756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.439 [2024-11-28 07:17:29.679822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.439 [2024-11-28 07:17:29.679970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.439 [2024-11-28 07:17:29.680115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.439 [2024-11-28 07:17:29.680124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.841 07:17:30 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:08.841 00:08:08.841 SPDK Configuration: 00:08:08.841 Core mask: 0xf 00:08:08.841 00:08:08.841 Accel Perf Configuration: 00:08:08.841 Workload Type: decompress 00:08:08.841 Transfer size: 111250 bytes 00:08:08.841 Vector count 1 00:08:08.841 Module: software 00:08:08.841 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:08.841 Queue depth: 32 00:08:08.841 Allocate depth: 32 00:08:08.841 # threads/core: 1 00:08:08.841 Run time: 1 seconds 00:08:08.841 Verify: Yes 00:08:08.841 00:08:08.841 Running for 1 seconds... 00:08:08.841 00:08:08.841 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:08.841 ------------------------------------------------------------------------------------ 00:08:08.841 0,0 4128/s 170 MiB/s 0 0 00:08:08.841 3,0 3840/s 158 MiB/s 0 0 00:08:08.841 2,0 4256/s 175 MiB/s 0 0 00:08:08.841 1,0 4384/s 181 MiB/s 0 0 00:08:08.841 ==================================================================================== 00:08:08.841 Total 16608/s 1762 MiB/s 0 0' 00:08:08.841 07:17:30 -- accel/accel.sh@20 -- # IFS=: 00:08:08.841 07:17:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.841 07:17:30 -- accel/accel.sh@20 -- # read -r var val 00:08:08.841 07:17:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:08.841 07:17:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.841 07:17:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.841 07:17:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.841 07:17:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.841 07:17:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.841 07:17:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.841 07:17:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.841 07:17:30 -- accel/accel.sh@42 -- # jq -r . 00:08:08.841 [2024-11-28 07:17:31.005919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
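The _full_mcore variant keeps the 0xf core mask but adds `-o 0`, and the configuration above reports a 111250-byte transfer size instead of 4096. The per-core rates again sum to the Total row; illustration-only arithmetic, values copied from the table:

    echo $(( 4128 + 3840 + 4256 + 4384 ))        # -> 16608 transfers/s, as in the Total row
    echo $(( 16608 * 111250 / 1024 / 1024 ))     # -> 1762, matching "Total 16608/s 1762 MiB/s"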
00:08:08.841 [2024-11-28 07:17:31.006024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69267 ] 00:08:09.100 [2024-11-28 07:17:31.142261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.100 [2024-11-28 07:17:31.245026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.100 [2024-11-28 07:17:31.245140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.100 [2024-11-28 07:17:31.245277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.100 [2024-11-28 07:17:31.245284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val= 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val= 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val= 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val=0xf 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val= 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val= 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val=decompress 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val= 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val=software 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@23 -- # accel_module=software 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 
00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val=32 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val=32 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val=1 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val=Yes 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val= 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:09.100 07:17:31 -- accel/accel.sh@21 -- # val= 00:08:09.100 07:17:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # IFS=: 00:08:09.100 07:17:31 -- accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.477 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.477 07:17:32 -- 
accel/accel.sh@20 -- # read -r var val 00:08:10.477 07:17:32 -- accel/accel.sh@21 -- # val= 00:08:10.477 07:17:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.478 07:17:32 -- accel/accel.sh@20 -- # IFS=: 00:08:10.478 07:17:32 -- accel/accel.sh@20 -- # read -r var val 00:08:10.478 07:17:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:10.478 07:17:32 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:10.478 07:17:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.478 00:08:10.478 real 0m3.213s 00:08:10.478 user 0m10.003s 00:08:10.478 sys 0m0.288s 00:08:10.478 ************************************ 00:08:10.478 END TEST accel_decomp_full_mcore 00:08:10.478 ************************************ 00:08:10.478 07:17:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:10.478 07:17:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.478 07:17:32 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:10.478 07:17:32 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:10.478 07:17:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.478 07:17:32 -- common/autotest_common.sh@10 -- # set +x 00:08:10.478 ************************************ 00:08:10.478 START TEST accel_decomp_mthread 00:08:10.478 ************************************ 00:08:10.478 07:17:32 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:10.478 07:17:32 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.478 07:17:32 -- accel/accel.sh@17 -- # local accel_module 00:08:10.478 07:17:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:10.478 07:17:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:10.478 07:17:32 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.478 07:17:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.478 07:17:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.478 07:17:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.478 07:17:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.478 07:17:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.478 07:17:32 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.478 07:17:32 -- accel/accel.sh@42 -- # jq -r . 00:08:10.478 [2024-11-28 07:17:32.700568] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:10.478 [2024-11-28 07:17:32.700684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69300 ] 00:08:10.736 [2024-11-28 07:17:32.832893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.736 [2024-11-28 07:17:32.960109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.114 07:17:34 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:12.114 00:08:12.114 SPDK Configuration: 00:08:12.114 Core mask: 0x1 00:08:12.114 00:08:12.114 Accel Perf Configuration: 00:08:12.114 Workload Type: decompress 00:08:12.114 Transfer size: 4096 bytes 00:08:12.114 Vector count 1 00:08:12.114 Module: software 00:08:12.114 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.114 Queue depth: 32 00:08:12.114 Allocate depth: 32 00:08:12.114 # threads/core: 2 00:08:12.114 Run time: 1 seconds 00:08:12.114 Verify: Yes 00:08:12.114 00:08:12.114 Running for 1 seconds... 00:08:12.114 00:08:12.114 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:12.114 ------------------------------------------------------------------------------------ 00:08:12.114 0,1 33632/s 61 MiB/s 0 0 00:08:12.114 0,0 33536/s 61 MiB/s 0 0 00:08:12.114 ==================================================================================== 00:08:12.114 Total 67168/s 262 MiB/s 0 0' 00:08:12.114 07:17:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:12.114 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.114 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.114 07:17:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:12.114 07:17:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.114 07:17:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.114 07:17:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.114 07:17:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.114 07:17:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.114 07:17:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.114 07:17:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.114 07:17:34 -- accel/accel.sh@42 -- # jq -r . 00:08:12.114 [2024-11-28 07:17:34.217326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
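The _mthread variant passes `-T 2`, which shows up as "# threads/core: 2" in the configuration above and as two result rows for core 0 (0,0 and 0,1 in the Core,Thread column). Their sum is the Total row; illustration-only arithmetic, values copied from the table:

    echo $(( 33632 + 33536 ))                # -> 67168 transfers/s across the two threads
    echo $(( 67168 * 4096 / 1024 / 1024 ))   # -> 262, matching "Total 67168/s 262 MiB/s"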
00:08:12.114 [2024-11-28 07:17:34.217466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69325 ] 00:08:12.114 [2024-11-28 07:17:34.355147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.373 [2024-11-28 07:17:34.449527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val= 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val= 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val= 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val=0x1 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val= 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val= 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val=decompress 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val= 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val=software 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@23 -- # accel_module=software 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val=32 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- 
accel/accel.sh@21 -- # val=32 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val=2 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val=Yes 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val= 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:12.373 07:17:34 -- accel/accel.sh@21 -- # val= 00:08:12.373 07:17:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # IFS=: 00:08:12.373 07:17:34 -- accel/accel.sh@20 -- # read -r var val 00:08:13.751 07:17:35 -- accel/accel.sh@21 -- # val= 00:08:13.751 07:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.751 07:17:35 -- accel/accel.sh@21 -- # val= 00:08:13.751 07:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.751 07:17:35 -- accel/accel.sh@21 -- # val= 00:08:13.751 07:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.751 07:17:35 -- accel/accel.sh@21 -- # val= 00:08:13.751 07:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.751 07:17:35 -- accel/accel.sh@21 -- # val= 00:08:13.751 07:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.751 07:17:35 -- accel/accel.sh@21 -- # val= 00:08:13.751 07:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.751 07:17:35 -- accel/accel.sh@21 -- # val= 00:08:13.751 07:17:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # IFS=: 00:08:13.751 07:17:35 -- accel/accel.sh@20 -- # read -r var val 00:08:13.751 07:17:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:13.751 07:17:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:13.751 07:17:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.751 00:08:13.751 real 0m3.024s 00:08:13.751 user 0m2.557s 00:08:13.751 sys 0m0.254s 00:08:13.751 07:17:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.751 07:17:35 -- common/autotest_common.sh@10 -- # set +x 00:08:13.751 ************************************ 00:08:13.751 END 
TEST accel_decomp_mthread 00:08:13.751 ************************************ 00:08:13.751 07:17:35 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.751 07:17:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:13.751 07:17:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.751 07:17:35 -- common/autotest_common.sh@10 -- # set +x 00:08:13.751 ************************************ 00:08:13.751 START TEST accel_deomp_full_mthread 00:08:13.751 ************************************ 00:08:13.751 07:17:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.751 07:17:35 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.751 07:17:35 -- accel/accel.sh@17 -- # local accel_module 00:08:13.751 07:17:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.751 07:17:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:13.751 07:17:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.751 07:17:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.751 07:17:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.751 07:17:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.751 07:17:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.751 07:17:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.751 07:17:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.751 07:17:35 -- accel/accel.sh@42 -- # jq -r . 00:08:13.751 [2024-11-28 07:17:35.782657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:13.751 [2024-11-28 07:17:35.783522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69354 ] 00:08:13.751 [2024-11-28 07:17:35.922214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.751 [2024-11-28 07:17:36.022576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.126 07:17:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:15.126 00:08:15.126 SPDK Configuration: 00:08:15.126 Core mask: 0x1 00:08:15.126 00:08:15.126 Accel Perf Configuration: 00:08:15.126 Workload Type: decompress 00:08:15.126 Transfer size: 111250 bytes 00:08:15.126 Vector count 1 00:08:15.126 Module: software 00:08:15.126 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.126 Queue depth: 32 00:08:15.126 Allocate depth: 32 00:08:15.126 # threads/core: 2 00:08:15.126 Run time: 1 seconds 00:08:15.126 Verify: Yes 00:08:15.126 00:08:15.126 Running for 1 seconds... 
00:08:15.126 00:08:15.126 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:15.126 ------------------------------------------------------------------------------------ 00:08:15.126 0,1 2176/s 89 MiB/s 0 0 00:08:15.126 0,0 2176/s 89 MiB/s 0 0 00:08:15.126 ==================================================================================== 00:08:15.126 Total 4352/s 461 MiB/s 0 0' 00:08:15.126 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.126 07:17:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:15.126 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.126 07:17:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:15.126 07:17:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:15.126 07:17:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:15.126 07:17:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.126 07:17:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.126 07:17:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:15.126 07:17:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:15.126 07:17:37 -- accel/accel.sh@41 -- # local IFS=, 00:08:15.126 07:17:37 -- accel/accel.sh@42 -- # jq -r . 00:08:15.126 [2024-11-28 07:17:37.297699] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:15.126 [2024-11-28 07:17:37.297827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69374 ] 00:08:15.385 [2024-11-28 07:17:37.433621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.385 [2024-11-28 07:17:37.531516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val= 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val= 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val= 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val=0x1 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val= 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val= 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val=decompress 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val= 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val=software 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@23 -- # accel_module=software 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val=32 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val=32 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val=2 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val=Yes 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.385 07:17:37 -- accel/accel.sh@21 -- # val= 00:08:15.385 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.385 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.386 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:15.386 07:17:37 -- accel/accel.sh@21 -- # val= 00:08:15.386 07:17:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:15.386 07:17:37 -- accel/accel.sh@20 -- # IFS=: 00:08:15.386 07:17:37 -- accel/accel.sh@20 -- # read -r var val 00:08:16.762 07:17:38 -- accel/accel.sh@21 -- # val= 00:08:16.762 07:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # IFS=: 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # read -r var val 00:08:16.762 07:17:38 -- accel/accel.sh@21 -- # val= 00:08:16.762 07:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # IFS=: 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # read -r var val 00:08:16.762 07:17:38 -- accel/accel.sh@21 -- # val= 00:08:16.762 07:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # IFS=: 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # 
read -r var val 00:08:16.762 07:17:38 -- accel/accel.sh@21 -- # val= 00:08:16.762 07:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # IFS=: 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # read -r var val 00:08:16.762 07:17:38 -- accel/accel.sh@21 -- # val= 00:08:16.762 07:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # IFS=: 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # read -r var val 00:08:16.762 07:17:38 -- accel/accel.sh@21 -- # val= 00:08:16.762 07:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # IFS=: 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # read -r var val 00:08:16.762 07:17:38 -- accel/accel.sh@21 -- # val= 00:08:16.762 07:17:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # IFS=: 00:08:16.762 07:17:38 -- accel/accel.sh@20 -- # read -r var val 00:08:16.762 07:17:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:16.762 07:17:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:16.762 07:17:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.763 00:08:16.763 real 0m3.038s 00:08:16.763 user 0m2.580s 00:08:16.763 sys 0m0.245s 00:08:16.763 07:17:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:16.763 ************************************ 00:08:16.763 END TEST accel_deomp_full_mthread 00:08:16.763 ************************************ 00:08:16.763 07:17:38 -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 07:17:38 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:16.763 07:17:38 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:16.763 07:17:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:16.763 07:17:38 -- accel/accel.sh@129 -- # build_accel_config 00:08:16.763 07:17:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.763 07:17:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.763 07:17:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.763 07:17:38 -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 07:17:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.763 07:17:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.763 07:17:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.763 07:17:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.763 07:17:38 -- accel/accel.sh@42 -- # jq -r . 00:08:16.763 ************************************ 00:08:16.763 START TEST accel_dif_functional_tests 00:08:16.763 ************************************ 00:08:16.763 07:17:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:16.763 [2024-11-28 07:17:38.903062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:16.763 [2024-11-28 07:17:38.903623] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69409 ] 00:08:17.021 [2024-11-28 07:17:39.046056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:17.021 [2024-11-28 07:17:39.167659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.021 [2024-11-28 07:17:39.167807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.021 [2024-11-28 07:17:39.167813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.021 00:08:17.021 00:08:17.021 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.021 http://cunit.sourceforge.net/ 00:08:17.021 00:08:17.021 00:08:17.021 Suite: accel_dif 00:08:17.021 Test: verify: DIF generated, GUARD check ...passed 00:08:17.021 Test: verify: DIF generated, APPTAG check ...passed 00:08:17.021 Test: verify: DIF generated, REFTAG check ...passed 00:08:17.021 Test: verify: DIF not generated, GUARD check ...passed 00:08:17.021 Test: verify: DIF not generated, APPTAG check ...passed 00:08:17.021 Test: verify: DIF not generated, REFTAG check ...[2024-11-28 07:17:39.288411] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:17.021 [2024-11-28 07:17:39.288635] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:17.021 [2024-11-28 07:17:39.288675] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:17.021 [2024-11-28 07:17:39.288699] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:17.022 passed 00:08:17.022 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:17.022 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:17.022 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:17.022 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:17.022 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:17.022 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-28 07:17:39.288726] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:17.022 [2024-11-28 07:17:39.288801] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:17.022 [2024-11-28 07:17:39.288864] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:17.022 [2024-11-28 07:17:39.289085] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:17.022 passed 00:08:17.022 Test: generate copy: DIF generated, GUARD check ...passed 00:08:17.022 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:17.022 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:17.022 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:17.022 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:17.022 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:17.022 Test: generate copy: iovecs-len validate ...[2024-11-28 07:17:39.289681] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:17.022 passed 00:08:17.022 Test: generate copy: buffer alignment validate ...passed 00:08:17.022 00:08:17.022 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.022 suites 1 1 n/a 0 0 00:08:17.022 tests 20 20 20 0 0 00:08:17.022 asserts 204 204 204 0 n/a 00:08:17.022 00:08:17.022 Elapsed time = 0.005 seconds 00:08:17.292 ************************************ 00:08:17.292 END TEST accel_dif_functional_tests 00:08:17.292 00:08:17.292 real 0m0.709s 00:08:17.292 user 0m1.016s 00:08:17.292 sys 0m0.193s 00:08:17.292 07:17:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.292 07:17:39 -- common/autotest_common.sh@10 -- # set +x 00:08:17.292 ************************************ 00:08:17.551 00:08:17.551 real 1m6.591s 00:08:17.551 user 1m10.448s 00:08:17.551 sys 0m7.248s 00:08:17.551 07:17:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.551 07:17:39 -- common/autotest_common.sh@10 -- # set +x 00:08:17.551 ************************************ 00:08:17.551 END TEST accel 00:08:17.551 ************************************ 00:08:17.551 07:17:39 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:17.551 07:17:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.551 07:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.551 07:17:39 -- common/autotest_common.sh@10 -- # set +x 00:08:17.551 ************************************ 00:08:17.551 START TEST accel_rpc 00:08:17.551 ************************************ 00:08:17.551 07:17:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:17.551 * Looking for test storage... 00:08:17.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:17.551 07:17:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.551 07:17:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.551 07:17:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.551 07:17:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.551 07:17:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.551 07:17:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.551 07:17:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.551 07:17:39 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.551 07:17:39 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.551 07:17:39 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.551 07:17:39 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.551 07:17:39 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.551 07:17:39 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.551 07:17:39 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.551 07:17:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.551 07:17:39 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.551 07:17:39 -- scripts/common.sh@344 -- # : 1 00:08:17.551 07:17:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.551 07:17:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.551 07:17:39 -- scripts/common.sh@364 -- # decimal 1 00:08:17.551 07:17:39 -- scripts/common.sh@352 -- # local d=1 00:08:17.551 07:17:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.551 07:17:39 -- scripts/common.sh@354 -- # echo 1 00:08:17.551 07:17:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.551 07:17:39 -- scripts/common.sh@365 -- # decimal 2 00:08:17.551 07:17:39 -- scripts/common.sh@352 -- # local d=2 00:08:17.551 07:17:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.810 07:17:39 -- scripts/common.sh@354 -- # echo 2 00:08:17.810 07:17:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.810 07:17:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.810 07:17:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.810 07:17:39 -- scripts/common.sh@367 -- # return 0 00:08:17.810 07:17:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.810 07:17:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.810 --rc genhtml_branch_coverage=1 00:08:17.810 --rc genhtml_function_coverage=1 00:08:17.810 --rc genhtml_legend=1 00:08:17.810 --rc geninfo_all_blocks=1 00:08:17.810 --rc geninfo_unexecuted_blocks=1 00:08:17.810 00:08:17.810 ' 00:08:17.810 07:17:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.810 --rc genhtml_branch_coverage=1 00:08:17.810 --rc genhtml_function_coverage=1 00:08:17.810 --rc genhtml_legend=1 00:08:17.810 --rc geninfo_all_blocks=1 00:08:17.810 --rc geninfo_unexecuted_blocks=1 00:08:17.810 00:08:17.810 ' 00:08:17.810 07:17:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.810 --rc genhtml_branch_coverage=1 00:08:17.810 --rc genhtml_function_coverage=1 00:08:17.810 --rc genhtml_legend=1 00:08:17.810 --rc geninfo_all_blocks=1 00:08:17.810 --rc geninfo_unexecuted_blocks=1 00:08:17.810 00:08:17.810 ' 00:08:17.810 07:17:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.810 --rc genhtml_branch_coverage=1 00:08:17.810 --rc genhtml_function_coverage=1 00:08:17.810 --rc genhtml_legend=1 00:08:17.810 --rc geninfo_all_blocks=1 00:08:17.810 --rc geninfo_unexecuted_blocks=1 00:08:17.810 00:08:17.810 ' 00:08:17.810 07:17:39 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:17.810 07:17:39 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69486 00:08:17.810 07:17:39 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:17.810 07:17:39 -- accel/accel_rpc.sh@15 -- # waitforlisten 69486 00:08:17.810 07:17:39 -- common/autotest_common.sh@829 -- # '[' -z 69486 ']' 00:08:17.810 07:17:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.810 07:17:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.810 07:17:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
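For reference, the accel_rpc sequence being exercised here can be reproduced by hand: the target is started with --wait-for-rpc (so the framework is not yet initialized) and is then driven over the default /var/tmp/spdk.sock socket. A minimal sketch using only the rpc_cmd calls visible in this trace (backgrounding and waiting for the socket are simplified here, not taken from the run):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # assign the copy opcode to the software module before init, as accel_assign_opcode does
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  # finish subsystem initialization
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # confirm the assignment; the test greps for "software" in the .copy field
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software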
00:08:17.810 07:17:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.810 07:17:39 -- common/autotest_common.sh@10 -- # set +x 00:08:17.810 [2024-11-28 07:17:39.888754] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:17.810 [2024-11-28 07:17:39.889125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69486 ] 00:08:17.810 [2024-11-28 07:17:40.031757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.068 [2024-11-28 07:17:40.141836] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:18.068 [2024-11-28 07:17:40.142289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.004 07:17:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.004 07:17:40 -- common/autotest_common.sh@862 -- # return 0 00:08:19.004 07:17:40 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:19.004 07:17:40 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:19.004 07:17:40 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:19.004 07:17:40 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:19.004 07:17:40 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:19.004 07:17:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.004 07:17:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.004 07:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.004 ************************************ 00:08:19.004 START TEST accel_assign_opcode 00:08:19.004 ************************************ 00:08:19.004 07:17:40 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:19.004 07:17:40 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:19.004 07:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.004 07:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.004 [2024-11-28 07:17:40.971069] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:19.004 07:17:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.004 07:17:40 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:19.004 07:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.004 07:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.004 [2024-11-28 07:17:40.979051] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:19.004 07:17:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.004 07:17:40 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:19.004 07:17:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.004 07:17:40 -- common/autotest_common.sh@10 -- # set +x 00:08:19.004 07:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.004 07:17:41 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:19.004 07:17:41 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:19.004 07:17:41 -- accel/accel_rpc.sh@42 -- # grep software 00:08:19.004 07:17:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.004 07:17:41 -- common/autotest_common.sh@10 -- # set +x 00:08:19.262 07:17:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.262 software 00:08:19.262 
************************************ 00:08:19.262 END TEST accel_assign_opcode 00:08:19.262 ************************************ 00:08:19.262 00:08:19.262 real 0m0.353s 00:08:19.262 user 0m0.049s 00:08:19.262 sys 0m0.011s 00:08:19.262 07:17:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.262 07:17:41 -- common/autotest_common.sh@10 -- # set +x 00:08:19.262 07:17:41 -- accel/accel_rpc.sh@55 -- # killprocess 69486 00:08:19.262 07:17:41 -- common/autotest_common.sh@936 -- # '[' -z 69486 ']' 00:08:19.262 07:17:41 -- common/autotest_common.sh@940 -- # kill -0 69486 00:08:19.262 07:17:41 -- common/autotest_common.sh@941 -- # uname 00:08:19.263 07:17:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:19.263 07:17:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69486 00:08:19.263 killing process with pid 69486 00:08:19.263 07:17:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:19.263 07:17:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:19.263 07:17:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69486' 00:08:19.263 07:17:41 -- common/autotest_common.sh@955 -- # kill 69486 00:08:19.263 07:17:41 -- common/autotest_common.sh@960 -- # wait 69486 00:08:19.829 00:08:19.829 real 0m2.259s 00:08:19.829 user 0m2.376s 00:08:19.829 sys 0m0.513s 00:08:19.829 ************************************ 00:08:19.829 END TEST accel_rpc 00:08:19.829 ************************************ 00:08:19.829 07:17:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.829 07:17:41 -- common/autotest_common.sh@10 -- # set +x 00:08:19.829 07:17:41 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:19.829 07:17:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:19.829 07:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.830 07:17:41 -- common/autotest_common.sh@10 -- # set +x 00:08:19.830 ************************************ 00:08:19.830 START TEST app_cmdline 00:08:19.830 ************************************ 00:08:19.830 07:17:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:19.830 * Looking for test storage... 
00:08:19.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:19.830 07:17:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:19.830 07:17:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:19.830 07:17:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:20.088 07:17:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:20.088 07:17:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:20.088 07:17:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:20.088 07:17:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:20.088 07:17:42 -- scripts/common.sh@335 -- # IFS=.-: 00:08:20.088 07:17:42 -- scripts/common.sh@335 -- # read -ra ver1 00:08:20.088 07:17:42 -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.088 07:17:42 -- scripts/common.sh@336 -- # read -ra ver2 00:08:20.088 07:17:42 -- scripts/common.sh@337 -- # local 'op=<' 00:08:20.088 07:17:42 -- scripts/common.sh@339 -- # ver1_l=2 00:08:20.088 07:17:42 -- scripts/common.sh@340 -- # ver2_l=1 00:08:20.088 07:17:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:20.088 07:17:42 -- scripts/common.sh@343 -- # case "$op" in 00:08:20.088 07:17:42 -- scripts/common.sh@344 -- # : 1 00:08:20.088 07:17:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:20.088 07:17:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.088 07:17:42 -- scripts/common.sh@364 -- # decimal 1 00:08:20.088 07:17:42 -- scripts/common.sh@352 -- # local d=1 00:08:20.088 07:17:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.088 07:17:42 -- scripts/common.sh@354 -- # echo 1 00:08:20.088 07:17:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:20.088 07:17:42 -- scripts/common.sh@365 -- # decimal 2 00:08:20.088 07:17:42 -- scripts/common.sh@352 -- # local d=2 00:08:20.088 07:17:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.088 07:17:42 -- scripts/common.sh@354 -- # echo 2 00:08:20.088 07:17:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:20.088 07:17:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:20.088 07:17:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:20.088 07:17:42 -- scripts/common.sh@367 -- # return 0 00:08:20.088 07:17:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.088 07:17:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:20.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.088 --rc genhtml_branch_coverage=1 00:08:20.088 --rc genhtml_function_coverage=1 00:08:20.088 --rc genhtml_legend=1 00:08:20.088 --rc geninfo_all_blocks=1 00:08:20.089 --rc geninfo_unexecuted_blocks=1 00:08:20.089 00:08:20.089 ' 00:08:20.089 07:17:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:20.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.089 --rc genhtml_branch_coverage=1 00:08:20.089 --rc genhtml_function_coverage=1 00:08:20.089 --rc genhtml_legend=1 00:08:20.089 --rc geninfo_all_blocks=1 00:08:20.089 --rc geninfo_unexecuted_blocks=1 00:08:20.089 00:08:20.089 ' 00:08:20.089 07:17:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:20.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.089 --rc genhtml_branch_coverage=1 00:08:20.089 --rc genhtml_function_coverage=1 00:08:20.089 --rc genhtml_legend=1 00:08:20.089 --rc geninfo_all_blocks=1 00:08:20.089 --rc geninfo_unexecuted_blocks=1 00:08:20.089 00:08:20.089 ' 00:08:20.089 07:17:42 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:20.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.089 --rc genhtml_branch_coverage=1 00:08:20.089 --rc genhtml_function_coverage=1 00:08:20.089 --rc genhtml_legend=1 00:08:20.089 --rc geninfo_all_blocks=1 00:08:20.089 --rc geninfo_unexecuted_blocks=1 00:08:20.089 00:08:20.089 ' 00:08:20.089 07:17:42 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:20.089 07:17:42 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69592 00:08:20.089 07:17:42 -- app/cmdline.sh@18 -- # waitforlisten 69592 00:08:20.089 07:17:42 -- common/autotest_common.sh@829 -- # '[' -z 69592 ']' 00:08:20.089 07:17:42 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:20.089 07:17:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.089 07:17:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:20.089 07:17:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.089 07:17:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:20.089 07:17:42 -- common/autotest_common.sh@10 -- # set +x 00:08:20.089 [2024-11-28 07:17:42.213231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:20.089 [2024-11-28 07:17:42.213861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69592 ] 00:08:20.089 [2024-11-28 07:17:42.350539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.347 [2024-11-28 07:17:42.460456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.347 [2024-11-28 07:17:42.460892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.282 07:17:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.282 07:17:43 -- common/autotest_common.sh@862 -- # return 0 00:08:21.282 07:17:43 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:21.541 { 00:08:21.541 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:21.541 "fields": { 00:08:21.541 "major": 24, 00:08:21.541 "minor": 1, 00:08:21.541 "patch": 1, 00:08:21.541 "suffix": "-pre", 00:08:21.541 "commit": "c13c99a5e" 00:08:21.541 } 00:08:21.541 } 00:08:21.541 07:17:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:21.541 07:17:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:21.541 07:17:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:21.541 07:17:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:21.541 07:17:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:21.541 07:17:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:21.541 07:17:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.541 07:17:43 -- common/autotest_common.sh@10 -- # set +x 00:08:21.541 07:17:43 -- app/cmdline.sh@26 -- # sort 00:08:21.541 07:17:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.541 07:17:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:21.541 07:17:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:21.541 07:17:43 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.541 07:17:43 -- common/autotest_common.sh@650 -- # local es=0 00:08:21.541 07:17:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.541 07:17:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.541 07:17:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.541 07:17:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.541 07:17:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.541 07:17:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.541 07:17:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:21.541 07:17:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:21.541 07:17:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:21.541 07:17:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:21.800 request: 00:08:21.800 { 00:08:21.800 "method": "env_dpdk_get_mem_stats", 00:08:21.800 "req_id": 1 00:08:21.800 } 00:08:21.800 Got JSON-RPC error response 00:08:21.800 response: 00:08:21.800 { 00:08:21.800 "code": -32601, 00:08:21.800 "message": "Method not found" 00:08:21.800 } 00:08:21.800 07:17:43 -- common/autotest_common.sh@653 -- # es=1 00:08:21.800 07:17:43 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.800 07:17:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:21.800 07:17:43 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.800 07:17:43 -- app/cmdline.sh@1 -- # killprocess 69592 00:08:21.800 07:17:43 -- common/autotest_common.sh@936 -- # '[' -z 69592 ']' 00:08:21.800 07:17:43 -- common/autotest_common.sh@940 -- # kill -0 69592 00:08:21.800 07:17:43 -- common/autotest_common.sh@941 -- # uname 00:08:21.800 07:17:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.800 07:17:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69592 00:08:21.800 killing process with pid 69592 00:08:21.800 07:17:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:21.800 07:17:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:21.800 07:17:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69592' 00:08:21.800 07:17:43 -- common/autotest_common.sh@955 -- # kill 69592 00:08:21.800 07:17:43 -- common/autotest_common.sh@960 -- # wait 69592 00:08:22.398 ************************************ 00:08:22.398 END TEST app_cmdline 00:08:22.398 ************************************ 00:08:22.398 00:08:22.398 real 0m2.412s 00:08:22.398 user 0m2.991s 00:08:22.398 sys 0m0.549s 00:08:22.398 07:17:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.398 07:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:22.398 07:17:44 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:22.398 07:17:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:22.398 07:17:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.398 07:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:22.398 
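The cmdline test that just finished starts the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable. A condensed sketch of the same checks (the commands are the ones from the trace above; the target and socket are assumed to be the ones this test started):

  # prints the version object shown above ("SPDK v24.01.1-pre git sha1 c13c99a5e", major 24, minor 1, patch 1, suffix -pre)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # exactly two methods are expected back: rpc_get_methods and spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
  # any non-whitelisted method is rejected with JSON-RPC error -32601 "Method not found"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats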
************************************ 00:08:22.398 START TEST version 00:08:22.398 ************************************ 00:08:22.398 07:17:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:22.398 * Looking for test storage... 00:08:22.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:22.398 07:17:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:22.398 07:17:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:22.398 07:17:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:22.398 07:17:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:22.398 07:17:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:22.398 07:17:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:22.398 07:17:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:22.398 07:17:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:22.398 07:17:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:22.398 07:17:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.398 07:17:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:22.398 07:17:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:22.398 07:17:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:22.398 07:17:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:22.398 07:17:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:22.398 07:17:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:22.398 07:17:44 -- scripts/common.sh@344 -- # : 1 00:08:22.398 07:17:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:22.398 07:17:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.398 07:17:44 -- scripts/common.sh@364 -- # decimal 1 00:08:22.398 07:17:44 -- scripts/common.sh@352 -- # local d=1 00:08:22.398 07:17:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.398 07:17:44 -- scripts/common.sh@354 -- # echo 1 00:08:22.398 07:17:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:22.398 07:17:44 -- scripts/common.sh@365 -- # decimal 2 00:08:22.398 07:17:44 -- scripts/common.sh@352 -- # local d=2 00:08:22.398 07:17:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.398 07:17:44 -- scripts/common.sh@354 -- # echo 2 00:08:22.398 07:17:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:22.398 07:17:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:22.398 07:17:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:22.398 07:17:44 -- scripts/common.sh@367 -- # return 0 00:08:22.398 07:17:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.398 07:17:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:22.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.398 --rc genhtml_branch_coverage=1 00:08:22.398 --rc genhtml_function_coverage=1 00:08:22.398 --rc genhtml_legend=1 00:08:22.398 --rc geninfo_all_blocks=1 00:08:22.398 --rc geninfo_unexecuted_blocks=1 00:08:22.398 00:08:22.398 ' 00:08:22.398 07:17:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:22.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.398 --rc genhtml_branch_coverage=1 00:08:22.398 --rc genhtml_function_coverage=1 00:08:22.398 --rc genhtml_legend=1 00:08:22.398 --rc geninfo_all_blocks=1 00:08:22.398 --rc geninfo_unexecuted_blocks=1 00:08:22.398 00:08:22.398 ' 00:08:22.398 07:17:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:22.398 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:22.398 --rc genhtml_branch_coverage=1 00:08:22.398 --rc genhtml_function_coverage=1 00:08:22.398 --rc genhtml_legend=1 00:08:22.398 --rc geninfo_all_blocks=1 00:08:22.398 --rc geninfo_unexecuted_blocks=1 00:08:22.398 00:08:22.398 ' 00:08:22.398 07:17:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:22.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.398 --rc genhtml_branch_coverage=1 00:08:22.398 --rc genhtml_function_coverage=1 00:08:22.398 --rc genhtml_legend=1 00:08:22.398 --rc geninfo_all_blocks=1 00:08:22.398 --rc geninfo_unexecuted_blocks=1 00:08:22.398 00:08:22.398 ' 00:08:22.398 07:17:44 -- app/version.sh@17 -- # get_header_version major 00:08:22.398 07:17:44 -- app/version.sh@14 -- # cut -f2 00:08:22.398 07:17:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:22.398 07:17:44 -- app/version.sh@14 -- # tr -d '"' 00:08:22.398 07:17:44 -- app/version.sh@17 -- # major=24 00:08:22.398 07:17:44 -- app/version.sh@18 -- # get_header_version minor 00:08:22.398 07:17:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:22.398 07:17:44 -- app/version.sh@14 -- # cut -f2 00:08:22.398 07:17:44 -- app/version.sh@14 -- # tr -d '"' 00:08:22.398 07:17:44 -- app/version.sh@18 -- # minor=1 00:08:22.398 07:17:44 -- app/version.sh@19 -- # get_header_version patch 00:08:22.398 07:17:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:22.398 07:17:44 -- app/version.sh@14 -- # cut -f2 00:08:22.398 07:17:44 -- app/version.sh@14 -- # tr -d '"' 00:08:22.398 07:17:44 -- app/version.sh@19 -- # patch=1 00:08:22.398 07:17:44 -- app/version.sh@20 -- # get_header_version suffix 00:08:22.398 07:17:44 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:22.398 07:17:44 -- app/version.sh@14 -- # cut -f2 00:08:22.398 07:17:44 -- app/version.sh@14 -- # tr -d '"' 00:08:22.398 07:17:44 -- app/version.sh@20 -- # suffix=-pre 00:08:22.398 07:17:44 -- app/version.sh@22 -- # version=24.1 00:08:22.398 07:17:44 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:22.398 07:17:44 -- app/version.sh@25 -- # version=24.1.1 00:08:22.398 07:17:44 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:22.398 07:17:44 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:22.398 07:17:44 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:22.657 07:17:44 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:22.657 07:17:44 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:22.657 00:08:22.657 real 0m0.287s 00:08:22.657 user 0m0.182s 00:08:22.657 sys 0m0.138s 00:08:22.657 ************************************ 00:08:22.657 END TEST version 00:08:22.657 ************************************ 00:08:22.657 07:17:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.657 07:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:22.657 07:17:44 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:22.657 07:17:44 -- spdk/autotest.sh@191 -- # uname -s 00:08:22.657 07:17:44 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
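The version test above never launches the target; it only parses include/spdk/version.h with grep/cut/tr and then cross-checks the in-tree Python package (version.sh puts the repo's python directory on PYTHONPATH first). A compressed sketch of that pipeline (the $hdr shorthand is mine; the grep/cut/tr invocations are the ones from the trace):

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "$major.$minor.$patch$suffix"   # 24.1.1-pre in this build; the test maps the -pre suffix to rc0 (24.1.1rc0)
  python3 -c 'import spdk; print(spdk.__version__)'   # must match, here 24.1.1rc0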
00:08:22.657 07:17:44 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:22.657 07:17:44 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:08:22.657 07:17:44 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:08:22.657 07:17:44 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:22.657 07:17:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:22.657 07:17:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.657 07:17:44 -- common/autotest_common.sh@10 -- # set +x 00:08:22.657 ************************************ 00:08:22.657 START TEST spdk_dd 00:08:22.657 ************************************ 00:08:22.657 07:17:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:22.657 * Looking for test storage... 00:08:22.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:22.657 07:17:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:22.657 07:17:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:22.657 07:17:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:22.657 07:17:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:22.657 07:17:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:22.657 07:17:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:22.657 07:17:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:22.657 07:17:44 -- scripts/common.sh@335 -- # IFS=.-: 00:08:22.657 07:17:44 -- scripts/common.sh@335 -- # read -ra ver1 00:08:22.657 07:17:44 -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.657 07:17:44 -- scripts/common.sh@336 -- # read -ra ver2 00:08:22.657 07:17:44 -- scripts/common.sh@337 -- # local 'op=<' 00:08:22.657 07:17:44 -- scripts/common.sh@339 -- # ver1_l=2 00:08:22.657 07:17:44 -- scripts/common.sh@340 -- # ver2_l=1 00:08:22.657 07:17:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:22.657 07:17:44 -- scripts/common.sh@343 -- # case "$op" in 00:08:22.657 07:17:44 -- scripts/common.sh@344 -- # : 1 00:08:22.657 07:17:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:22.657 07:17:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.917 07:17:44 -- scripts/common.sh@364 -- # decimal 1 00:08:22.917 07:17:44 -- scripts/common.sh@352 -- # local d=1 00:08:22.917 07:17:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.917 07:17:44 -- scripts/common.sh@354 -- # echo 1 00:08:22.917 07:17:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:22.917 07:17:44 -- scripts/common.sh@365 -- # decimal 2 00:08:22.917 07:17:44 -- scripts/common.sh@352 -- # local d=2 00:08:22.917 07:17:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.917 07:17:44 -- scripts/common.sh@354 -- # echo 2 00:08:22.917 07:17:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:22.917 07:17:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:22.917 07:17:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:22.917 07:17:44 -- scripts/common.sh@367 -- # return 0 00:08:22.917 07:17:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.917 07:17:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.917 --rc genhtml_branch_coverage=1 00:08:22.917 --rc genhtml_function_coverage=1 00:08:22.917 --rc genhtml_legend=1 00:08:22.917 --rc geninfo_all_blocks=1 00:08:22.917 --rc geninfo_unexecuted_blocks=1 00:08:22.917 00:08:22.917 ' 00:08:22.917 07:17:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.917 --rc genhtml_branch_coverage=1 00:08:22.917 --rc genhtml_function_coverage=1 00:08:22.917 --rc genhtml_legend=1 00:08:22.917 --rc geninfo_all_blocks=1 00:08:22.917 --rc geninfo_unexecuted_blocks=1 00:08:22.917 00:08:22.917 ' 00:08:22.917 07:17:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.917 --rc genhtml_branch_coverage=1 00:08:22.917 --rc genhtml_function_coverage=1 00:08:22.917 --rc genhtml_legend=1 00:08:22.917 --rc geninfo_all_blocks=1 00:08:22.917 --rc geninfo_unexecuted_blocks=1 00:08:22.917 00:08:22.917 ' 00:08:22.917 07:17:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:22.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.917 --rc genhtml_branch_coverage=1 00:08:22.917 --rc genhtml_function_coverage=1 00:08:22.917 --rc genhtml_legend=1 00:08:22.917 --rc geninfo_all_blocks=1 00:08:22.917 --rc geninfo_unexecuted_blocks=1 00:08:22.917 00:08:22.917 ' 00:08:22.917 07:17:44 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.917 07:17:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.917 07:17:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.917 07:17:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.917 07:17:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.917 07:17:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.917 07:17:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.917 07:17:44 -- paths/export.sh@5 -- # export PATH 00:08:22.917 07:17:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.917 07:17:44 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:23.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:23.177 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:23.177 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:23.177 07:17:45 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:23.177 07:17:45 -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:23.177 07:17:45 -- scripts/common.sh@311 -- # local bdf bdfs 00:08:23.177 07:17:45 -- scripts/common.sh@312 -- # local nvmes 00:08:23.177 07:17:45 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:08:23.177 07:17:45 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:23.177 07:17:45 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:08:23.177 07:17:45 -- scripts/common.sh@297 -- # local bdf= 00:08:23.177 07:17:45 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:08:23.177 07:17:45 -- scripts/common.sh@232 -- # local class 00:08:23.177 07:17:45 -- scripts/common.sh@233 -- # local subclass 00:08:23.177 07:17:45 -- scripts/common.sh@234 -- # local progif 00:08:23.177 07:17:45 -- scripts/common.sh@235 -- # printf %02x 1 00:08:23.177 07:17:45 -- scripts/common.sh@235 -- # class=01 00:08:23.177 07:17:45 -- scripts/common.sh@236 -- # printf %02x 8 00:08:23.177 07:17:45 -- scripts/common.sh@236 -- # subclass=08 00:08:23.177 07:17:45 -- scripts/common.sh@237 -- # printf %02x 2 00:08:23.177 07:17:45 -- scripts/common.sh@237 -- # progif=02 00:08:23.177 07:17:45 -- scripts/common.sh@239 -- # hash lspci 00:08:23.177 07:17:45 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:08:23.177 07:17:45 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:08:23.177 07:17:45 -- scripts/common.sh@242 -- # grep -i -- -p02 00:08:23.177 07:17:45 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:23.177 07:17:45 -- scripts/common.sh@244 -- # tr -d '"' 00:08:23.177 07:17:45 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:23.177 07:17:45 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:08:23.177 07:17:45 -- scripts/common.sh@15 -- # local i 00:08:23.177 07:17:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:08:23.177 07:17:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:23.177 07:17:45 -- scripts/common.sh@24 -- # return 0 00:08:23.177 07:17:45 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:08:23.177 07:17:45 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:23.177 07:17:45 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:08:23.177 07:17:45 -- scripts/common.sh@15 -- # local i 00:08:23.177 07:17:45 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:08:23.177 07:17:45 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:23.177 07:17:45 -- scripts/common.sh@24 -- # return 0 00:08:23.177 07:17:45 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:08:23.177 07:17:45 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:08:23.177 07:17:45 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:08:23.177 07:17:45 -- scripts/common.sh@322 -- # uname -s 00:08:23.177 07:17:45 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:08:23.177 07:17:45 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:08:23.177 07:17:45 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:08:23.177 07:17:45 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:08:23.177 07:17:45 -- scripts/common.sh@322 -- # uname -s 00:08:23.177 07:17:45 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:08:23.177 07:17:45 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:08:23.177 07:17:45 -- scripts/common.sh@327 -- # (( 2 )) 00:08:23.177 07:17:45 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:23.177 07:17:45 -- dd/dd.sh@13 -- # check_liburing 00:08:23.177 07:17:45 -- dd/common.sh@139 -- # local lib so 00:08:23.177 07:17:45 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:08:23.177 07:17:45 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:08:23.177 
07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* 
]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:23.177 07:17:45 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:23.177 07:17:45 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:23.177 * spdk_dd linked to liburing 00:08:23.177 07:17:45 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:23.177 07:17:45 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:23.177 07:17:45 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:23.177 07:17:45 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:23.178 07:17:45 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:23.178 07:17:45 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:23.178 07:17:45 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:23.178 07:17:45 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:23.178 07:17:45 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:23.178 07:17:45 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:23.178 07:17:45 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:23.178 07:17:45 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:23.178 07:17:45 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:23.178 07:17:45 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:23.178 07:17:45 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:23.178 07:17:45 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:23.178 07:17:45 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:23.178 07:17:45 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:23.178 07:17:45 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:23.178 07:17:45 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:23.178 07:17:45 
-- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:23.178 07:17:45 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:23.178 07:17:45 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:23.178 07:17:45 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:23.178 07:17:45 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:23.178 07:17:45 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:23.178 07:17:45 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:23.178 07:17:45 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:23.178 07:17:45 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:23.178 07:17:45 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:23.178 07:17:45 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:23.178 07:17:45 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:23.178 07:17:45 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:23.178 07:17:45 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:23.178 07:17:45 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:23.178 07:17:45 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:23.178 07:17:45 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:23.178 07:17:45 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:23.178 07:17:45 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:23.178 07:17:45 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:23.178 07:17:45 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:23.178 07:17:45 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:23.178 07:17:45 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:23.178 07:17:45 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:23.178 07:17:45 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:23.178 07:17:45 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:23.178 07:17:45 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:23.178 07:17:45 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:23.178 07:17:45 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:23.178 07:17:45 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:23.178 07:17:45 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:23.178 07:17:45 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:23.178 07:17:45 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:23.178 07:17:45 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:23.178 07:17:45 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:08:23.178 07:17:45 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:23.178 07:17:45 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:23.178 07:17:45 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:23.178 07:17:45 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:23.178 07:17:45 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:23.178 07:17:45 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:23.178 07:17:45 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:23.178 07:17:45 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:23.178 07:17:45 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:23.178 07:17:45 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:23.178 07:17:45 -- common/build_config.sh@64 -- # 
CONFIG_SHARED=y 00:08:23.178 07:17:45 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:23.178 07:17:45 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:23.178 07:17:45 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:23.178 07:17:45 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:23.178 07:17:45 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:23.178 07:17:45 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:23.178 07:17:45 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:23.178 07:17:45 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:23.178 07:17:45 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:23.178 07:17:45 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:23.178 07:17:45 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:23.178 07:17:45 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:23.178 07:17:45 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:23.178 07:17:45 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:23.178 07:17:45 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:08:23.178 07:17:45 -- dd/common.sh@149 -- # [[ y != y ]] 00:08:23.178 07:17:45 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:08:23.178 07:17:45 -- dd/common.sh@156 -- # export liburing_in_use=1 00:08:23.178 07:17:45 -- dd/common.sh@156 -- # liburing_in_use=1 00:08:23.178 07:17:45 -- dd/common.sh@157 -- # return 0 00:08:23.178 07:17:45 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:23.178 07:17:45 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:08:23.178 07:17:45 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:23.178 07:17:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.178 07:17:45 -- common/autotest_common.sh@10 -- # set +x 00:08:23.178 ************************************ 00:08:23.178 START TEST spdk_dd_basic_rw 00:08:23.178 ************************************ 00:08:23.178 07:17:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:08:23.437 * Looking for test storage... 
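The long run of "[[ lib == liburing.so.* ]]" tests above is check_liburing (dd/common.sh@137-157): spdk_dd is launched with LD_TRACE_LOADED_OBJECTS=1 so the dynamic linker only prints the shared objects it would load, and that list is scanned for liburing. A condensed sketch of the idea, assuming the binary path used in this run; the real helper additionally sources build_config.sh and checks for /usr/lib64/liburing.so.2, as the preceding lines show, before exporting liburing_in_use=1:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as used in this run
  check_liburing_sketch() {
      local lib so in_use=0
      # ld.so prints "lib => path (addr)" lines instead of running the binary
      while read -r lib _ so _; do
          [[ $lib == liburing.so.* ]] && in_use=1
      done < <(LD_TRACE_LOADED_OBJECTS=1 "$SPDK_DD")
      (( in_use )) && printf '* spdk_dd linked to liburing\n'
      return 0
  }
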
00:08:23.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:23.437 07:17:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:23.437 07:17:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:23.437 07:17:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:23.437 07:17:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:23.437 07:17:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:23.437 07:17:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:23.437 07:17:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:23.437 07:17:45 -- scripts/common.sh@335 -- # IFS=.-: 00:08:23.437 07:17:45 -- scripts/common.sh@335 -- # read -ra ver1 00:08:23.437 07:17:45 -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.437 07:17:45 -- scripts/common.sh@336 -- # read -ra ver2 00:08:23.437 07:17:45 -- scripts/common.sh@337 -- # local 'op=<' 00:08:23.437 07:17:45 -- scripts/common.sh@339 -- # ver1_l=2 00:08:23.437 07:17:45 -- scripts/common.sh@340 -- # ver2_l=1 00:08:23.437 07:17:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:23.437 07:17:45 -- scripts/common.sh@343 -- # case "$op" in 00:08:23.437 07:17:45 -- scripts/common.sh@344 -- # : 1 00:08:23.437 07:17:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:23.437 07:17:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.437 07:17:45 -- scripts/common.sh@364 -- # decimal 1 00:08:23.437 07:17:45 -- scripts/common.sh@352 -- # local d=1 00:08:23.437 07:17:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.437 07:17:45 -- scripts/common.sh@354 -- # echo 1 00:08:23.437 07:17:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:23.437 07:17:45 -- scripts/common.sh@365 -- # decimal 2 00:08:23.437 07:17:45 -- scripts/common.sh@352 -- # local d=2 00:08:23.437 07:17:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.437 07:17:45 -- scripts/common.sh@354 -- # echo 2 00:08:23.437 07:17:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:23.437 07:17:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:23.437 07:17:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:23.437 07:17:45 -- scripts/common.sh@367 -- # return 0 00:08:23.437 07:17:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.437 07:17:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.437 --rc genhtml_branch_coverage=1 00:08:23.437 --rc genhtml_function_coverage=1 00:08:23.437 --rc genhtml_legend=1 00:08:23.437 --rc geninfo_all_blocks=1 00:08:23.437 --rc geninfo_unexecuted_blocks=1 00:08:23.437 00:08:23.437 ' 00:08:23.437 07:17:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.437 --rc genhtml_branch_coverage=1 00:08:23.437 --rc genhtml_function_coverage=1 00:08:23.437 --rc genhtml_legend=1 00:08:23.437 --rc geninfo_all_blocks=1 00:08:23.437 --rc geninfo_unexecuted_blocks=1 00:08:23.437 00:08:23.437 ' 00:08:23.437 07:17:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.437 --rc genhtml_branch_coverage=1 00:08:23.437 --rc genhtml_function_coverage=1 00:08:23.437 --rc genhtml_legend=1 00:08:23.437 --rc geninfo_all_blocks=1 00:08:23.437 --rc geninfo_unexecuted_blocks=1 00:08:23.437 00:08:23.437 ' 00:08:23.437 07:17:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:23.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.437 --rc genhtml_branch_coverage=1 00:08:23.437 --rc genhtml_function_coverage=1 00:08:23.437 --rc genhtml_legend=1 00:08:23.437 --rc geninfo_all_blocks=1 00:08:23.437 --rc geninfo_unexecuted_blocks=1 00:08:23.437 00:08:23.437 ' 00:08:23.437 07:17:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:23.437 07:17:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.437 07:17:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.437 07:17:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.437 07:17:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.438 07:17:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.438 07:17:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.438 07:17:45 -- paths/export.sh@5 -- # export PATH 00:08:23.438 07:17:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.438 07:17:45 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:23.438 07:17:45 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:23.438 07:17:45 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:23.438 07:17:45 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:08:23.438 07:17:45 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:23.438 07:17:45 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' 
['trtype']='pcie') 00:08:23.438 07:17:45 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:23.438 07:17:45 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.438 07:17:45 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.438 07:17:45 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:08:23.438 07:17:45 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:08:23.438 07:17:45 -- dd/common.sh@126 -- # mapfile -t id 00:08:23.438 07:17:45 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:08:23.699 07:17:45 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command 
Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 
Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2190 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:23.699 07:17:45 -- dd/common.sh@130 -- # lbaf=04 00:08:23.700 07:17:45 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported 
Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive 
Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 9 Host Read Commands: 2190 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:23.700 07:17:45 -- dd/common.sh@132 -- # lbaf=4096 00:08:23.700 07:17:45 -- dd/common.sh@134 -- # echo 4096 00:08:23.700 07:17:45 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:23.700 07:17:45 -- dd/basic_rw.sh@96 -- # : 00:08:23.700 07:17:45 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:23.700 07:17:45 -- dd/basic_rw.sh@96 -- # gen_conf 00:08:23.700 07:17:45 -- dd/common.sh@31 -- # xtrace_disable 
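The block-size discovery just above is get_native_nvme_bs (dd/common.sh@124-134): it captures the spdk_nvme_identify dump for the controller and applies the two regexes shown, first to find the active LBA format (#04 here) and then that format's data size (4096). A minimal standalone sketch of that extraction, assuming the identify binary at the path used in this run:

  IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  get_native_nvme_bs_sketch() {
      local pci=$1 lbaf id
      mapfile -t id < <("$IDENTIFY" -r "trtype:pcie traddr:$pci")
      # same regexes as in the trace: active format number, then its data size
      local re_fmt='Current LBA Format: *LBA Format #([0-9]+)'
      [[ ${id[*]} =~ $re_fmt ]] || return 1
      lbaf=${BASH_REMATCH[1]}
      local re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
      [[ ${id[*]} =~ $re_size ]] || return 1
      echo "${BASH_REMATCH[1]}"    # 4096 for this controller
  }
  native_bs=$(get_native_nvme_bs_sketch 0000:00:06.0)
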
00:08:23.700 07:17:45 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:23.700 07:17:45 -- common/autotest_common.sh@10 -- # set +x 00:08:23.700 07:17:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.700 07:17:45 -- common/autotest_common.sh@10 -- # set +x 00:08:23.700 ************************************ 00:08:23.700 START TEST dd_bs_lt_native_bs 00:08:23.700 ************************************ 00:08:23.700 07:17:45 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:23.700 07:17:45 -- common/autotest_common.sh@650 -- # local es=0 00:08:23.700 07:17:45 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:23.700 07:17:45 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.700 07:17:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.700 07:17:45 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.700 07:17:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.700 07:17:45 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.700 07:17:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.700 07:17:45 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.700 07:17:45 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.700 07:17:45 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:23.700 { 00:08:23.700 "subsystems": [ 00:08:23.700 { 00:08:23.700 "subsystem": "bdev", 00:08:23.700 "config": [ 00:08:23.700 { 00:08:23.700 "params": { 00:08:23.700 "trtype": "pcie", 00:08:23.700 "traddr": "0000:00:06.0", 00:08:23.700 "name": "Nvme0" 00:08:23.700 }, 00:08:23.700 "method": "bdev_nvme_attach_controller" 00:08:23.700 }, 00:08:23.700 { 00:08:23.700 "method": "bdev_wait_for_examine" 00:08:23.700 } 00:08:23.700 ] 00:08:23.700 } 00:08:23.700 ] 00:08:23.700 } 00:08:23.700 [2024-11-28 07:17:45.906171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:23.700 [2024-11-28 07:17:45.906646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69940 ] 00:08:23.960 [2024-11-28 07:17:46.042983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.960 [2024-11-28 07:17:46.163448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.218 [2024-11-28 07:17:46.340616] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:24.218 [2024-11-28 07:17:46.340745] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.218 [2024-11-28 07:17:46.492273] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:24.477 07:17:46 -- common/autotest_common.sh@653 -- # es=234 00:08:24.477 07:17:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.477 07:17:46 -- common/autotest_common.sh@662 -- # es=106 00:08:24.477 07:17:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:24.477 07:17:46 -- common/autotest_common.sh@670 -- # es=1 00:08:24.477 07:17:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.477 00:08:24.477 real 0m0.751s 00:08:24.477 user 0m0.525s 00:08:24.477 sys 0m0.180s 00:08:24.477 07:17:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:24.477 07:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:24.477 ************************************ 00:08:24.477 END TEST dd_bs_lt_native_bs 00:08:24.477 ************************************ 00:08:24.477 07:17:46 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:24.477 07:17:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:24.477 07:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.477 07:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:24.477 ************************************ 00:08:24.477 START TEST dd_rw 00:08:24.477 ************************************ 00:08:24.477 07:17:46 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:08:24.477 07:17:46 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:24.477 07:17:46 -- dd/basic_rw.sh@12 -- # local count size 00:08:24.477 07:17:46 -- dd/basic_rw.sh@13 -- # local qds bss 00:08:24.477 07:17:46 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:24.477 07:17:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:24.477 07:17:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:24.477 07:17:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:24.477 07:17:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:24.477 07:17:46 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:24.477 07:17:46 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:24.477 07:17:46 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:24.477 07:17:46 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:24.477 07:17:46 -- dd/basic_rw.sh@23 -- # count=15 00:08:24.477 07:17:46 -- dd/basic_rw.sh@24 -- # count=15 00:08:24.477 07:17:46 -- dd/basic_rw.sh@25 -- # size=61440 00:08:24.477 07:17:46 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:24.477 07:17:46 -- dd/common.sh@98 -- # xtrace_disable 00:08:24.477 07:17:46 -- common/autotest_common.sh@10 -- # set +x 00:08:25.043 07:17:47 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
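The dd_rw pass starting here builds bss by left-shifting the 4096-byte native block size (giving 4096, 8192 and 16384), pairs each block size with queue depths 1 and 64, and for every combination writes dd.dump0 to Nvme0n1, reads it back into dd.dump1 and diffs the two files. A rough sketch of one such pass, with the --json bdev config (gen_conf fed over /dev/fd/62 in the real run) omitted for brevity; the count of 15 is simply the value observed for the 4 KiB pass in this log, not a claim about how the script derives it:

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  native_bs=4096
  bss=()
  for bs in {0..2}; do
      bss+=($((native_bs << bs)))          # 4096, 8192, 16384 bytes
  done
  qds=(1 64)
  bs=${bss[0]} qd=${qds[0]} count=15       # the combination traced just above
  "$SPDK_DD" --if="$dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd"
  "$SPDK_DD" --ib=Nvme0n1 --of="$dump1" --bs="$bs" --qd="$qd" --count="$count"
  diff -q "$dump0" "$dump1"
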
00:08:25.043 07:17:47 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:25.043 07:17:47 -- dd/common.sh@31 -- # xtrace_disable 00:08:25.043 07:17:47 -- common/autotest_common.sh@10 -- # set +x 00:08:25.043 [2024-11-28 07:17:47.310350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:25.043 [2024-11-28 07:17:47.310486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69978 ] 00:08:25.301 { 00:08:25.301 "subsystems": [ 00:08:25.301 { 00:08:25.301 "subsystem": "bdev", 00:08:25.301 "config": [ 00:08:25.301 { 00:08:25.301 "params": { 00:08:25.301 "trtype": "pcie", 00:08:25.301 "traddr": "0000:00:06.0", 00:08:25.301 "name": "Nvme0" 00:08:25.301 }, 00:08:25.301 "method": "bdev_nvme_attach_controller" 00:08:25.301 }, 00:08:25.301 { 00:08:25.301 "method": "bdev_wait_for_examine" 00:08:25.301 } 00:08:25.301 ] 00:08:25.301 } 00:08:25.301 ] 00:08:25.301 } 00:08:25.301 [2024-11-28 07:17:47.448982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.301 [2024-11-28 07:17:47.570958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.559  [2024-11-28T07:17:48.093Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:25.818 00:08:25.818 07:17:48 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:25.818 07:17:48 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:25.818 07:17:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:25.818 07:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:25.818 [2024-11-28 07:17:48.071935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:25.818 [2024-11-28 07:17:48.072056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69990 ] 00:08:25.818 { 00:08:25.818 "subsystems": [ 00:08:25.818 { 00:08:25.818 "subsystem": "bdev", 00:08:25.818 "config": [ 00:08:25.818 { 00:08:25.818 "params": { 00:08:25.818 "trtype": "pcie", 00:08:25.818 "traddr": "0000:00:06.0", 00:08:25.818 "name": "Nvme0" 00:08:25.818 }, 00:08:25.818 "method": "bdev_nvme_attach_controller" 00:08:25.818 }, 00:08:25.818 { 00:08:25.818 "method": "bdev_wait_for_examine" 00:08:25.818 } 00:08:25.818 ] 00:08:25.818 } 00:08:25.818 ] 00:08:25.818 } 00:08:26.077 [2024-11-28 07:17:48.207801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.077 [2024-11-28 07:17:48.321635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.335  [2024-11-28T07:17:48.869Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:26.594 00:08:26.594 07:17:48 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.594 07:17:48 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:26.594 07:17:48 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:26.594 07:17:48 -- dd/common.sh@11 -- # local nvme_ref= 00:08:26.594 07:17:48 -- dd/common.sh@12 -- # local size=61440 00:08:26.594 07:17:48 -- dd/common.sh@14 -- # local bs=1048576 00:08:26.594 07:17:48 -- dd/common.sh@15 -- # local count=1 00:08:26.594 07:17:48 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:26.594 07:17:48 -- dd/common.sh@18 -- # gen_conf 00:08:26.594 07:17:48 -- dd/common.sh@31 -- # xtrace_disable 00:08:26.594 07:17:48 -- common/autotest_common.sh@10 -- # set +x 00:08:26.594 [2024-11-28 07:17:48.823792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:26.594 [2024-11-28 07:17:48.824523] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70004 ] 00:08:26.594 { 00:08:26.594 "subsystems": [ 00:08:26.594 { 00:08:26.594 "subsystem": "bdev", 00:08:26.594 "config": [ 00:08:26.594 { 00:08:26.594 "params": { 00:08:26.594 "trtype": "pcie", 00:08:26.594 "traddr": "0000:00:06.0", 00:08:26.594 "name": "Nvme0" 00:08:26.594 }, 00:08:26.594 "method": "bdev_nvme_attach_controller" 00:08:26.594 }, 00:08:26.594 { 00:08:26.594 "method": "bdev_wait_for_examine" 00:08:26.594 } 00:08:26.594 ] 00:08:26.594 } 00:08:26.594 ] 00:08:26.594 } 00:08:26.853 [2024-11-28 07:17:48.966682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.853 [2024-11-28 07:17:49.066500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.112  [2024-11-28T07:17:49.645Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:27.370 00:08:27.370 07:17:49 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:27.370 07:17:49 -- dd/basic_rw.sh@23 -- # count=15 00:08:27.370 07:17:49 -- dd/basic_rw.sh@24 -- # count=15 00:08:27.370 07:17:49 -- dd/basic_rw.sh@25 -- # size=61440 00:08:27.370 07:17:49 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:27.370 07:17:49 -- dd/common.sh@98 -- # xtrace_disable 00:08:27.370 07:17:49 -- common/autotest_common.sh@10 -- # set +x 00:08:27.937 07:17:50 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:27.938 07:17:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:27.938 07:17:50 -- dd/common.sh@31 -- # xtrace_disable 00:08:27.938 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:27.938 { 00:08:27.938 "subsystems": [ 00:08:27.938 { 00:08:27.938 "subsystem": "bdev", 00:08:27.938 "config": [ 00:08:27.938 { 00:08:27.938 "params": { 00:08:27.938 "trtype": "pcie", 00:08:27.938 "traddr": "0000:00:06.0", 00:08:27.938 "name": "Nvme0" 00:08:27.938 }, 00:08:27.938 "method": "bdev_nvme_attach_controller" 00:08:27.938 }, 00:08:27.938 { 00:08:27.938 "method": "bdev_wait_for_examine" 00:08:27.938 } 00:08:27.938 ] 00:08:27.938 } 00:08:27.938 ] 00:08:27.938 } 00:08:27.938 [2024-11-28 07:17:50.193127] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:27.938 [2024-11-28 07:17:50.193671] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70026 ] 00:08:28.198 [2024-11-28 07:17:50.337556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.198 [2024-11-28 07:17:50.435975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.456  [2024-11-28T07:17:50.990Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:28.715 00:08:28.715 07:17:50 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:28.715 07:17:50 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:28.715 07:17:50 -- dd/common.sh@31 -- # xtrace_disable 00:08:28.715 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:08:28.715 [2024-11-28 07:17:50.916408] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:28.715 [2024-11-28 07:17:50.916532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70040 ] 00:08:28.715 { 00:08:28.715 "subsystems": [ 00:08:28.715 { 00:08:28.715 "subsystem": "bdev", 00:08:28.715 "config": [ 00:08:28.715 { 00:08:28.715 "params": { 00:08:28.715 "trtype": "pcie", 00:08:28.715 "traddr": "0000:00:06.0", 00:08:28.715 "name": "Nvme0" 00:08:28.715 }, 00:08:28.715 "method": "bdev_nvme_attach_controller" 00:08:28.715 }, 00:08:28.715 { 00:08:28.715 "method": "bdev_wait_for_examine" 00:08:28.715 } 00:08:28.715 ] 00:08:28.715 } 00:08:28.715 ] 00:08:28.715 } 00:08:28.974 [2024-11-28 07:17:51.058720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.974 [2024-11-28 07:17:51.160890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.233  [2024-11-28T07:17:51.767Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:29.492 00:08:29.492 07:17:51 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.492 07:17:51 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:29.492 07:17:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:29.492 07:17:51 -- dd/common.sh@11 -- # local nvme_ref= 00:08:29.492 07:17:51 -- dd/common.sh@12 -- # local size=61440 00:08:29.492 07:17:51 -- dd/common.sh@14 -- # local bs=1048576 00:08:29.492 07:17:51 -- dd/common.sh@15 -- # local count=1 00:08:29.493 07:17:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:29.493 07:17:51 -- dd/common.sh@18 -- # gen_conf 00:08:29.493 07:17:51 -- dd/common.sh@31 -- # xtrace_disable 00:08:29.493 07:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:29.493 [2024-11-28 07:17:51.630611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:29.493 [2024-11-28 07:17:51.631228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70059 ] 00:08:29.493 { 00:08:29.493 "subsystems": [ 00:08:29.493 { 00:08:29.493 "subsystem": "bdev", 00:08:29.493 "config": [ 00:08:29.493 { 00:08:29.493 "params": { 00:08:29.493 "trtype": "pcie", 00:08:29.493 "traddr": "0000:00:06.0", 00:08:29.493 "name": "Nvme0" 00:08:29.493 }, 00:08:29.493 "method": "bdev_nvme_attach_controller" 00:08:29.493 }, 00:08:29.493 { 00:08:29.493 "method": "bdev_wait_for_examine" 00:08:29.493 } 00:08:29.493 ] 00:08:29.493 } 00:08:29.493 ] 00:08:29.493 } 00:08:29.752 [2024-11-28 07:17:51.773504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.752 [2024-11-28 07:17:51.876129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.011  [2024-11-28T07:17:52.544Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:30.269 00:08:30.269 07:17:52 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:30.269 07:17:52 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:30.269 07:17:52 -- dd/basic_rw.sh@23 -- # count=7 00:08:30.269 07:17:52 -- dd/basic_rw.sh@24 -- # count=7 00:08:30.270 07:17:52 -- dd/basic_rw.sh@25 -- # size=57344 00:08:30.270 07:17:52 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:30.270 07:17:52 -- dd/common.sh@98 -- # xtrace_disable 00:08:30.270 07:17:52 -- common/autotest_common.sh@10 -- # set +x 00:08:30.836 07:17:52 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:30.836 07:17:52 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:30.836 07:17:52 -- dd/common.sh@31 -- # xtrace_disable 00:08:30.836 07:17:52 -- common/autotest_common.sh@10 -- # set +x 00:08:30.836 [2024-11-28 07:17:52.968503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
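The cycle traced above, and repeated below for each block size and queue depth, is: write the generated dump file into the Nvme0n1 bdev with spdk_dd, read it back into a second dump file, diff the two, then wipe the start of the namespace from /dev/zero (clear_nvme). A minimal standalone sketch of that cycle, using only the flags, paths and bdev JSON config visible in this log; the config is fed through process substitution here instead of the harness's gen_conf helper on /dev/fd/62, and the bs=4096/qd=64 values are simply the ones from the run above:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  # Same bdev config the trace prints before every spdk_dd run.
  CONF='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
  "$DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=64 --json <(printf '%s' "$CONF")              # write the 61440-byte dump file
  "$DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=64 --count=15 --json <(printf '%s' "$CONF")   # read the same 15 blocks back
  diff -q "$DUMP0" "$DUMP1"                                                                     # round trip must be byte-identical
  "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$CONF")        # clear_nvme: zero the first 1 MiB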
00:08:30.836 [2024-11-28 07:17:52.968915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70077 ] 00:08:30.836 { 00:08:30.836 "subsystems": [ 00:08:30.836 { 00:08:30.836 "subsystem": "bdev", 00:08:30.836 "config": [ 00:08:30.836 { 00:08:30.836 "params": { 00:08:30.836 "trtype": "pcie", 00:08:30.836 "traddr": "0000:00:06.0", 00:08:30.836 "name": "Nvme0" 00:08:30.836 }, 00:08:30.836 "method": "bdev_nvme_attach_controller" 00:08:30.836 }, 00:08:30.836 { 00:08:30.836 "method": "bdev_wait_for_examine" 00:08:30.836 } 00:08:30.836 ] 00:08:30.836 } 00:08:30.836 ] 00:08:30.836 } 00:08:30.836 [2024-11-28 07:17:53.106502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.095 [2024-11-28 07:17:53.212563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.353  [2024-11-28T07:17:53.628Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:31.353 00:08:31.613 07:17:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:31.613 07:17:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:31.613 07:17:53 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.613 07:17:53 -- common/autotest_common.sh@10 -- # set +x 00:08:31.613 [2024-11-28 07:17:53.671009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:31.613 [2024-11-28 07:17:53.671487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70095 ] 00:08:31.613 { 00:08:31.613 "subsystems": [ 00:08:31.613 { 00:08:31.613 "subsystem": "bdev", 00:08:31.613 "config": [ 00:08:31.613 { 00:08:31.613 "params": { 00:08:31.613 "trtype": "pcie", 00:08:31.613 "traddr": "0000:00:06.0", 00:08:31.613 "name": "Nvme0" 00:08:31.613 }, 00:08:31.613 "method": "bdev_nvme_attach_controller" 00:08:31.613 }, 00:08:31.613 { 00:08:31.613 "method": "bdev_wait_for_examine" 00:08:31.613 } 00:08:31.613 ] 00:08:31.613 } 00:08:31.613 ] 00:08:31.613 } 00:08:31.613 [2024-11-28 07:17:53.809227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.873 [2024-11-28 07:17:53.907941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.873  [2024-11-28T07:17:54.408Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:32.133 00:08:32.133 07:17:54 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.133 07:17:54 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:32.133 07:17:54 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:32.133 07:17:54 -- dd/common.sh@11 -- # local nvme_ref= 00:08:32.133 07:17:54 -- dd/common.sh@12 -- # local size=57344 00:08:32.133 07:17:54 -- dd/common.sh@14 -- # local bs=1048576 00:08:32.133 07:17:54 -- dd/common.sh@15 -- # local count=1 00:08:32.133 07:17:54 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:32.133 07:17:54 -- dd/common.sh@18 -- # gen_conf 00:08:32.133 07:17:54 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.133 07:17:54 -- common/autotest_common.sh@10 -- # set +x 00:08:32.133 [2024-11-28 
07:17:54.383695] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:32.133 [2024-11-28 07:17:54.383789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70109 ] 00:08:32.133 { 00:08:32.133 "subsystems": [ 00:08:32.133 { 00:08:32.133 "subsystem": "bdev", 00:08:32.133 "config": [ 00:08:32.133 { 00:08:32.133 "params": { 00:08:32.133 "trtype": "pcie", 00:08:32.133 "traddr": "0000:00:06.0", 00:08:32.133 "name": "Nvme0" 00:08:32.133 }, 00:08:32.133 "method": "bdev_nvme_attach_controller" 00:08:32.133 }, 00:08:32.133 { 00:08:32.133 "method": "bdev_wait_for_examine" 00:08:32.133 } 00:08:32.133 ] 00:08:32.133 } 00:08:32.133 ] 00:08:32.133 } 00:08:32.393 [2024-11-28 07:17:54.521726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.393 [2024-11-28 07:17:54.623392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.652  [2024-11-28T07:17:55.186Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:32.911 00:08:32.911 07:17:55 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:32.911 07:17:55 -- dd/basic_rw.sh@23 -- # count=7 00:08:32.911 07:17:55 -- dd/basic_rw.sh@24 -- # count=7 00:08:32.911 07:17:55 -- dd/basic_rw.sh@25 -- # size=57344 00:08:32.911 07:17:55 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:32.911 07:17:55 -- dd/common.sh@98 -- # xtrace_disable 00:08:32.911 07:17:55 -- common/autotest_common.sh@10 -- # set +x 00:08:33.479 07:17:55 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:33.479 07:17:55 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:33.479 07:17:55 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.479 07:17:55 -- common/autotest_common.sh@10 -- # set +x 00:08:33.479 [2024-11-28 07:17:55.656570] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:33.479 [2024-11-28 07:17:55.656972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70132 ] 00:08:33.479 { 00:08:33.479 "subsystems": [ 00:08:33.479 { 00:08:33.479 "subsystem": "bdev", 00:08:33.479 "config": [ 00:08:33.479 { 00:08:33.479 "params": { 00:08:33.479 "trtype": "pcie", 00:08:33.479 "traddr": "0000:00:06.0", 00:08:33.479 "name": "Nvme0" 00:08:33.479 }, 00:08:33.479 "method": "bdev_nvme_attach_controller" 00:08:33.479 }, 00:08:33.479 { 00:08:33.479 "method": "bdev_wait_for_examine" 00:08:33.479 } 00:08:33.479 ] 00:08:33.479 } 00:08:33.479 ] 00:08:33.479 } 00:08:33.739 [2024-11-28 07:17:55.791835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.739 [2024-11-28 07:17:55.893582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.998  [2024-11-28T07:17:56.532Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:34.257 00:08:34.257 07:17:56 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:34.257 07:17:56 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:34.257 07:17:56 -- dd/common.sh@31 -- # xtrace_disable 00:08:34.257 07:17:56 -- common/autotest_common.sh@10 -- # set +x 00:08:34.257 [2024-11-28 07:17:56.351031] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:34.257 [2024-11-28 07:17:56.351495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70145 ] 00:08:34.257 { 00:08:34.257 "subsystems": [ 00:08:34.257 { 00:08:34.257 "subsystem": "bdev", 00:08:34.257 "config": [ 00:08:34.257 { 00:08:34.257 "params": { 00:08:34.257 "trtype": "pcie", 00:08:34.257 "traddr": "0000:00:06.0", 00:08:34.257 "name": "Nvme0" 00:08:34.257 }, 00:08:34.257 "method": "bdev_nvme_attach_controller" 00:08:34.257 }, 00:08:34.257 { 00:08:34.257 "method": "bdev_wait_for_examine" 00:08:34.257 } 00:08:34.257 ] 00:08:34.257 } 00:08:34.257 ] 00:08:34.257 } 00:08:34.257 [2024-11-28 07:17:56.484640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.515 [2024-11-28 07:17:56.593513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.515  [2024-11-28T07:17:57.049Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:34.775 00:08:34.775 07:17:57 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.775 07:17:57 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:34.775 07:17:57 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:34.775 07:17:57 -- dd/common.sh@11 -- # local nvme_ref= 00:08:34.775 07:17:57 -- dd/common.sh@12 -- # local size=57344 00:08:34.775 07:17:57 -- dd/common.sh@14 -- # local bs=1048576 00:08:34.775 07:17:57 -- dd/common.sh@15 -- # local count=1 00:08:34.775 07:17:57 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:34.775 07:17:57 -- dd/common.sh@18 -- # gen_conf 00:08:34.775 07:17:57 -- dd/common.sh@31 -- # xtrace_disable 00:08:34.775 07:17:57 -- common/autotest_common.sh@10 -- # set +x 00:08:35.034 [2024-11-28 
07:17:57.060788] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:35.034 [2024-11-28 07:17:57.060919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70158 ] 00:08:35.034 { 00:08:35.034 "subsystems": [ 00:08:35.034 { 00:08:35.034 "subsystem": "bdev", 00:08:35.034 "config": [ 00:08:35.034 { 00:08:35.034 "params": { 00:08:35.034 "trtype": "pcie", 00:08:35.034 "traddr": "0000:00:06.0", 00:08:35.034 "name": "Nvme0" 00:08:35.034 }, 00:08:35.034 "method": "bdev_nvme_attach_controller" 00:08:35.034 }, 00:08:35.034 { 00:08:35.034 "method": "bdev_wait_for_examine" 00:08:35.034 } 00:08:35.034 ] 00:08:35.034 } 00:08:35.034 ] 00:08:35.034 } 00:08:35.034 [2024-11-28 07:17:57.202535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.034 [2024-11-28 07:17:57.294652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.293  [2024-11-28T07:17:57.827Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:35.552 00:08:35.552 07:17:57 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:35.552 07:17:57 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:35.552 07:17:57 -- dd/basic_rw.sh@23 -- # count=3 00:08:35.552 07:17:57 -- dd/basic_rw.sh@24 -- # count=3 00:08:35.552 07:17:57 -- dd/basic_rw.sh@25 -- # size=49152 00:08:35.552 07:17:57 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:35.552 07:17:57 -- dd/common.sh@98 -- # xtrace_disable 00:08:35.552 07:17:57 -- common/autotest_common.sh@10 -- # set +x 00:08:36.120 07:17:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:36.120 07:17:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:36.120 07:17:58 -- dd/common.sh@31 -- # xtrace_disable 00:08:36.120 07:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:36.120 [2024-11-28 07:17:58.177061] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:36.120 [2024-11-28 07:17:58.177182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70176 ] 00:08:36.120 { 00:08:36.120 "subsystems": [ 00:08:36.120 { 00:08:36.120 "subsystem": "bdev", 00:08:36.120 "config": [ 00:08:36.120 { 00:08:36.120 "params": { 00:08:36.120 "trtype": "pcie", 00:08:36.120 "traddr": "0000:00:06.0", 00:08:36.120 "name": "Nvme0" 00:08:36.120 }, 00:08:36.120 "method": "bdev_nvme_attach_controller" 00:08:36.120 }, 00:08:36.120 { 00:08:36.120 "method": "bdev_wait_for_examine" 00:08:36.120 } 00:08:36.120 ] 00:08:36.120 } 00:08:36.120 ] 00:08:36.120 } 00:08:36.120 [2024-11-28 07:17:58.321060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.378 [2024-11-28 07:17:58.417342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.378  [2024-11-28T07:17:58.910Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:36.635 00:08:36.635 07:17:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:36.635 07:17:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:36.635 07:17:58 -- dd/common.sh@31 -- # xtrace_disable 00:08:36.635 07:17:58 -- common/autotest_common.sh@10 -- # set +x 00:08:36.635 { 00:08:36.635 "subsystems": [ 00:08:36.635 { 00:08:36.635 "subsystem": "bdev", 00:08:36.635 "config": [ 00:08:36.635 { 00:08:36.635 "params": { 00:08:36.635 "trtype": "pcie", 00:08:36.635 "traddr": "0000:00:06.0", 00:08:36.635 "name": "Nvme0" 00:08:36.635 }, 00:08:36.635 "method": "bdev_nvme_attach_controller" 00:08:36.635 }, 00:08:36.635 { 00:08:36.635 "method": "bdev_wait_for_examine" 00:08:36.635 } 00:08:36.635 ] 00:08:36.635 } 00:08:36.635 ] 00:08:36.635 } 00:08:36.635 [2024-11-28 07:17:58.860258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:36.635 [2024-11-28 07:17:58.860387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70194 ] 00:08:36.894 [2024-11-28 07:17:59.001489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.894 [2024-11-28 07:17:59.093025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.151  [2024-11-28T07:17:59.685Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:37.410 00:08:37.410 07:17:59 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.410 07:17:59 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:37.410 07:17:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:37.410 07:17:59 -- dd/common.sh@11 -- # local nvme_ref= 00:08:37.410 07:17:59 -- dd/common.sh@12 -- # local size=49152 00:08:37.410 07:17:59 -- dd/common.sh@14 -- # local bs=1048576 00:08:37.410 07:17:59 -- dd/common.sh@15 -- # local count=1 00:08:37.410 07:17:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:37.410 07:17:59 -- dd/common.sh@18 -- # gen_conf 00:08:37.410 07:17:59 -- dd/common.sh@31 -- # xtrace_disable 00:08:37.410 07:17:59 -- common/autotest_common.sh@10 -- # set +x 00:08:37.410 { 00:08:37.410 "subsystems": [ 00:08:37.410 { 00:08:37.410 "subsystem": "bdev", 00:08:37.410 "config": [ 00:08:37.410 { 00:08:37.410 "params": { 00:08:37.410 "trtype": "pcie", 00:08:37.410 "traddr": "0000:00:06.0", 00:08:37.410 "name": "Nvme0" 00:08:37.410 }, 00:08:37.410 "method": "bdev_nvme_attach_controller" 00:08:37.410 }, 00:08:37.410 { 00:08:37.410 "method": "bdev_wait_for_examine" 00:08:37.410 } 00:08:37.410 ] 00:08:37.410 } 00:08:37.410 ] 00:08:37.410 } 00:08:37.410 [2024-11-28 07:17:59.556996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:37.410 [2024-11-28 07:17:59.557138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70208 ] 00:08:37.668 [2024-11-28 07:17:59.701772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.668 [2024-11-28 07:17:59.795640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.926  [2024-11-28T07:18:00.201Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:37.926 00:08:37.926 07:18:00 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:37.926 07:18:00 -- dd/basic_rw.sh@23 -- # count=3 00:08:37.926 07:18:00 -- dd/basic_rw.sh@24 -- # count=3 00:08:37.926 07:18:00 -- dd/basic_rw.sh@25 -- # size=49152 00:08:37.926 07:18:00 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:37.926 07:18:00 -- dd/common.sh@98 -- # xtrace_disable 00:08:37.926 07:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:38.493 07:18:00 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:38.493 07:18:00 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:38.493 07:18:00 -- dd/common.sh@31 -- # xtrace_disable 00:08:38.493 07:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:38.493 [2024-11-28 07:18:00.647951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:38.493 [2024-11-28 07:18:00.648081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70226 ] 00:08:38.493 { 00:08:38.493 "subsystems": [ 00:08:38.493 { 00:08:38.493 "subsystem": "bdev", 00:08:38.493 "config": [ 00:08:38.493 { 00:08:38.493 "params": { 00:08:38.493 "trtype": "pcie", 00:08:38.493 "traddr": "0000:00:06.0", 00:08:38.493 "name": "Nvme0" 00:08:38.493 }, 00:08:38.493 "method": "bdev_nvme_attach_controller" 00:08:38.493 }, 00:08:38.493 { 00:08:38.493 "method": "bdev_wait_for_examine" 00:08:38.493 } 00:08:38.493 ] 00:08:38.493 } 00:08:38.493 ] 00:08:38.493 } 00:08:38.752 [2024-11-28 07:18:00.787893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.752 [2024-11-28 07:18:00.876025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.752  [2024-11-28T07:18:01.286Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:39.011 00:08:39.011 07:18:01 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:39.011 07:18:01 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:39.011 07:18:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:39.011 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:08:39.269 [2024-11-28 07:18:01.297687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:39.269 [2024-11-28 07:18:01.297831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70238 ] 00:08:39.269 { 00:08:39.269 "subsystems": [ 00:08:39.269 { 00:08:39.269 "subsystem": "bdev", 00:08:39.269 "config": [ 00:08:39.269 { 00:08:39.269 "params": { 00:08:39.269 "trtype": "pcie", 00:08:39.269 "traddr": "0000:00:06.0", 00:08:39.269 "name": "Nvme0" 00:08:39.269 }, 00:08:39.269 "method": "bdev_nvme_attach_controller" 00:08:39.269 }, 00:08:39.269 { 00:08:39.269 "method": "bdev_wait_for_examine" 00:08:39.269 } 00:08:39.269 ] 00:08:39.269 } 00:08:39.269 ] 00:08:39.269 } 00:08:39.269 [2024-11-28 07:18:01.438001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.269 [2024-11-28 07:18:01.526926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.527  [2024-11-28T07:18:02.060Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:39.785 00:08:39.785 07:18:01 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:39.785 07:18:01 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:39.785 07:18:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:39.785 07:18:01 -- dd/common.sh@11 -- # local nvme_ref= 00:08:39.785 07:18:01 -- dd/common.sh@12 -- # local size=49152 00:08:39.785 07:18:01 -- dd/common.sh@14 -- # local bs=1048576 00:08:39.785 07:18:01 -- dd/common.sh@15 -- # local count=1 00:08:39.785 07:18:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:39.785 07:18:01 -- dd/common.sh@18 -- # gen_conf 00:08:39.785 07:18:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:39.786 07:18:01 -- common/autotest_common.sh@10 -- # set +x 00:08:39.786 [2024-11-28 07:18:01.943637] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:39.786 [2024-11-28 07:18:01.943775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70257 ] 00:08:39.786 { 00:08:39.786 "subsystems": [ 00:08:39.786 { 00:08:39.786 "subsystem": "bdev", 00:08:39.786 "config": [ 00:08:39.786 { 00:08:39.786 "params": { 00:08:39.786 "trtype": "pcie", 00:08:39.786 "traddr": "0000:00:06.0", 00:08:39.786 "name": "Nvme0" 00:08:39.786 }, 00:08:39.786 "method": "bdev_nvme_attach_controller" 00:08:39.786 }, 00:08:39.786 { 00:08:39.786 "method": "bdev_wait_for_examine" 00:08:39.786 } 00:08:39.786 ] 00:08:39.786 } 00:08:39.786 ] 00:08:39.786 } 00:08:40.043 [2024-11-28 07:18:02.084308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.043 [2024-11-28 07:18:02.165584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.043  [2024-11-28T07:18:02.577Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:40.302 00:08:40.302 00:08:40.302 real 0m15.878s 00:08:40.302 user 0m11.492s 00:08:40.302 sys 0m3.235s 00:08:40.302 07:18:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.302 07:18:02 -- common/autotest_common.sh@10 -- # set +x 00:08:40.302 ************************************ 00:08:40.302 END TEST dd_rw 00:08:40.302 ************************************ 00:08:40.302 07:18:02 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:40.302 07:18:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:40.302 07:18:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.302 07:18:02 -- common/autotest_common.sh@10 -- # set +x 00:08:40.561 ************************************ 00:08:40.561 START TEST dd_rw_offset 00:08:40.561 ************************************ 00:08:40.561 07:18:02 -- common/autotest_common.sh@1114 -- # basic_offset 00:08:40.561 07:18:02 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:40.561 07:18:02 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:40.561 07:18:02 -- dd/common.sh@98 -- # xtrace_disable 00:08:40.561 07:18:02 -- common/autotest_common.sh@10 -- # set +x 00:08:40.561 07:18:02 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:40.561 07:18:02 -- dd/basic_rw.sh@56 -- # 
data=pw6lndoa0rgzat63fgdo69ygv2sd0q7nxhf5onqrebu5o9205kfq75bqsesyuldhht96ijsue70uxucl0kxj25uw6vrbwf4k0lqux9n9zoavr1iepfm0zqv757nymazeusf0xn2guxq3ddnyg2ou6b0xps09yta7idqxkna96s7qpacfftk2z8txn5vbqmqur35psiksccq87spcrf7i1gagrzvbddcyvvqx38jhatlhptptgt8w42p29wx67dud52riwomq1e3lzaxhh778443oc36gx96da0drsi2627727scrdqz5n9w3dsnfv2yrk3ua46vxnj6piixx0v5bylzmeq1gp1x5dguag23wjt5cjg3fdtzp4q49da7h5xs21c4vskfjy3kiq8120j122ksxco3125440n134dcrf4lezd76zm5hu2uu12cgm63220nxz309l65bfsj4hwa1qix2uf112b02mri176lzglzs9q4lh1iutbug9537ynr5goif946fjxdwjxgjsj6d4fc0b1kkjzbntncwzti6tm1vpkjiy6g3qxaka605w6mk9bo85kfzvtch3rv9aoiumvwkbzhi0hbaohlu1w5wuyocp0apo0uzpg9bcv8wdrieb6qzt63pidsoyzv9rbp9i9d4zcx8isvn9i0gaylelz1a8d0b6w5axl0lxebxplofcmij1r4emi0erdgh47p93rc9ek75th4oezx17gfry836snt0zjedli3ihjkq1u4z95r0584lbplvfu4htjaux3nor6dh6yretrdo9sy4yfr5f9gcjcqwq9e2zio9o2a7ee3o3pk9f74au1a5va3eq1gqejt53aa3w7myupu37kueuz7smsuiem2lp95wj366uvqfw51nz8i3djmrvy1ho9utfyx9se8u5fvnlq734c4l7b5ayrlv49uk0oztrzifnmibts9yemqg8tadfb83tk2tz84ielorqhglwv0bdl62w3mlvbx600k6ycscceakwzje68hggabzvc3hkbj8hx92rk5v7pfbp0lmnllgfl4k7mxygxpje36ahxnmh626nxjulurmofz7ob42tgqa5ouqanv50mm48j27vr2xm5wq6lpul9ehmyi5widtql92j3nzb48va38f5dnoezqe8wl39vygiswnof6b7qn32ax84d6ktl82ln16huftjbp2igpw0qyavdyj4glxcgb4a1iigtb9jgw2kpwe7511jny2xn403s66kfiivoaqzgnd218xbju8ge0rkxqm7scz8qon1mxvhf6j8is9n0tzvileymco079amnurw803wiyiebjslhpqf3cqp0c5zhaljexq5jhqh6lbmjems4troph29rdegesgemzyltaqiykr5g6h2lv6eqq9srtjcc1beej2z8xis9ikg60ofncuafboe14w9i7jlikpfn6c9ly5sh4if70lrz9dg4p741m90uxci6xu0v2xv8o2e5mpfm550ymc7a2pb0rcevv93lff9jecel2qkytf15x3e8slksprj1lreot6r7zy4uduwak8f0z9kn9e09f8t1fegz4n5u63wcx03svp6jzzh01cam20tml2zckw2pt8er2k6bcqtizpeleu9ea886yv6wiks9z8kv627pndxdv4gjowx0c99szqk2vifzsni6yhmio2gaq2t481aaa866qykwi7hz1nfbkvjy40s06gm83xksotfk7tdph4dtyasdrs40ibu2wi3m7wm6j4elfaiz8q6tk04c2a02qx345v0xz8xhlm57v0xsz6qchiigw82v0in346b2re0lk12fj9unctxfdwfc6hk7y06hofpalm5fdy0y4wgqsbddxbkq49h101drwj7cu5ootnl18rqvmbjedclk0p6m9opacubyui5578hm7x69c7o5munylnudm2d09bgxxcq3wt534q9n72m5p6mvbdijmz1d856pk2odo8kqh85oeiq4bhzoemckizb7jpryp1lluce5eit4qxeilsgf8p5cu2sa91plva05tnbbzx361hgjf14y3id47q5ti31x3pgr8c1en5kp0trwqvr13gpy1v0525bz2m6t3cj8yjt62zanfc376eba18ws13uho5rvjc3bngh21m0g71ixsqrmxhkqix7bu21nv1ljk10kq86sqv7o8w2ppzssm2ju852k4kvqi9xvmentg6lpjp1hch9ptr97kwzv4ov4l7hza1eyrkpzh6vlgb4xcx2i8wyaw0oo23ztzngycjnvvtmuouyadi63gwj1gnl0v2jtnjec3w9v24whmlhpplg8no0s8jbcibcgzia6i7197yhkcxr3f4gaug5erb6wh1tbvhgxfs8jcdiuxp9wk8auk52b8cqy2dpq6bgy0zi7dezeql1ha4uby6bicxt9893wulbe0ndjwej8jrfbyxhtledu1d44s7g3bp3gzq2w6qv4jvpym5jxbiqqcqr8l7dxae3wy0xm4gk707ncsryn5me3q4rwrd1aldxoq7ftbyvuc6wmeychuoptvbbme4k5om7p45oew19gjczct3cfpsy48ihis0f7ka5waa0t8ak5wdd06dwu2rr6akj9n82g33fi8h3nhkmoaszv3x1lsghpuy6iumuiiaut07bnwpo93l9d1eluv4e1wk2uhc28p6le5joplh4m3zlbcf2gr6kg82d2pvzu35h66hnlq082x11c6kszhudma2xolmlw44qjiscnqdex6xupllp6mia14eqls5x6u9au3by5ak6zq77jocx24m657ozojwsovbwgdbsn7y216q3ln48v5kq7lr9x79o1j56yfl82mnhgf1bdtk6xgsb17oo4eecrx8x61cns7ckk4a84p7n91cwauawfixslbkg9s8xhmakhay6nn0gdxzowb47wgubm1273bb969fsn0yi00xuvm5sip997ys0pyqygh7xv8vlpqnys298hapaa3n35dtf07y1086c9ignwy1yqv1haeh7i60gtjwruupg4sa13c0u2r7hejtpjvfx622tglcuoi5bzuv6w7nvn2k27298pcxclm1ilbt3581foeg9k2vj4s52uopk2ptktj5551kdwykg5jlyabio7a1ge10qknc01pc4c2862byc4ylwvguduomd68xdncxrenk5pwwrrkj9dea9dszxw5ca1ndzt3ukcjvcav58ins9netbcif58q87usnwca5adqq68o9ejpxcyugpsfwq3h0zty30mfwudkdtoci9xyp82j5dcchrxiksj5vhyudzu76m663roo8q3hcr3sgohwc8vl6wygmhjk0pa705mdy4flt1bf4v4ea2kk866k9g9no8mylcwd57uezxznxyk6dwk7veur5cm6vsnvp23rm6nna9xasw6g50v0ltl4suenvo1nzd8fdi52x6wpv09pqjn56v622feiud60hl8m1mupui7kbpf344m65fgqtfl2dvbcmtj
84atq8fhm3707257qj32t5v6ugywc1owo7e28serorb28xqiunxpn4wh7syl75c361xpjj39tprzkip2e5hcl98icyd09798h8jzr6y30nqc22pvvu5s9srz7ra7thlfdk9ox8qfuunqqz133ylykdorg4ta7qb4zzz5lus6lo6pbk3j6tob5ykg8vmbyp9hxpyrsiu9el7zj26wtj2g4gbe36m4qtnvx48zimx2k699t2wive9wozb3xxha04hzd3zyx0rtzoxc98ybb3ljbxltb135fhnx2erdsy0ktmiaks8xq2u3iy6nmgczzr1p5zenjvmful4wpgc3u1u8mlueobnuep6qts2nq8strow9h4qqdoqg81rxpak7eiaxk1xtlgdip2rf212almfenbwtl9uldpktuqosri9ri2nr2p4x8fjg218oo2cr27ugvv52il9y6ufff6n29mwuvloddss49clms703y35ispfepg8o9nxf7fizihx3qyki1hbc1z8945rj37bd6ds0zjywgyw0xit05y 00:08:40.561 07:18:02 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:40.561 07:18:02 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:40.561 07:18:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:40.561 07:18:02 -- common/autotest_common.sh@10 -- # set +x 00:08:40.561 [2024-11-28 07:18:02.676290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:40.561 [2024-11-28 07:18:02.676399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70287 ] 00:08:40.561 { 00:08:40.561 "subsystems": [ 00:08:40.561 { 00:08:40.561 "subsystem": "bdev", 00:08:40.561 "config": [ 00:08:40.561 { 00:08:40.561 "params": { 00:08:40.561 "trtype": "pcie", 00:08:40.561 "traddr": "0000:00:06.0", 00:08:40.561 "name": "Nvme0" 00:08:40.561 }, 00:08:40.561 "method": "bdev_nvme_attach_controller" 00:08:40.561 }, 00:08:40.561 { 00:08:40.561 "method": "bdev_wait_for_examine" 00:08:40.561 } 00:08:40.561 ] 00:08:40.561 } 00:08:40.561 ] 00:08:40.561 } 00:08:40.562 [2024-11-28 07:18:02.816074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.820 [2024-11-28 07:18:02.902251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.820  [2024-11-28T07:18:03.354Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:41.079 00:08:41.079 07:18:03 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:41.079 07:18:03 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:41.079 07:18:03 -- dd/common.sh@31 -- # xtrace_disable 00:08:41.079 07:18:03 -- common/autotest_common.sh@10 -- # set +x 00:08:41.079 [2024-11-28 07:18:03.316060] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
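dd_rw_offset narrows the same round trip to a single 4 KiB block: the pattern is written with --seek=1 and read back with --skip=1 --count=1, so the comparison only succeeds if both offsets land on the same block. A sketch of that flow, reusing DD, CONF and the dump paths from the sketch above; the 4 KiB pattern itself comes from the harness's gen_bytes helper and is assumed here to already sit in $DUMP0:

  "$DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$CONF")             # write one block at offset 1
  "$DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json <(printf '%s' "$CONF")   # read that block back
  read -rn4096 data_check < "$DUMP1"                                                  # first 4096 bytes of the read-back
  [[ "$(< "$DUMP0")" == "$data_check" ]]                                              # must equal the written pattern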
00:08:41.079 [2024-11-28 07:18:03.316165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70299 ] 00:08:41.079 { 00:08:41.079 "subsystems": [ 00:08:41.079 { 00:08:41.079 "subsystem": "bdev", 00:08:41.079 "config": [ 00:08:41.079 { 00:08:41.079 "params": { 00:08:41.079 "trtype": "pcie", 00:08:41.079 "traddr": "0000:00:06.0", 00:08:41.079 "name": "Nvme0" 00:08:41.079 }, 00:08:41.079 "method": "bdev_nvme_attach_controller" 00:08:41.079 }, 00:08:41.079 { 00:08:41.079 "method": "bdev_wait_for_examine" 00:08:41.079 } 00:08:41.079 ] 00:08:41.079 } 00:08:41.079 ] 00:08:41.079 } 00:08:41.337 [2024-11-28 07:18:03.456476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.337 [2024-11-28 07:18:03.543535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.596  [2024-11-28T07:18:04.130Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:41.855 00:08:41.855 07:18:03 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:41.856 07:18:03 -- dd/basic_rw.sh@72 -- # [[ pw6lndoa0rgzat63fgdo69ygv2sd0q7nxhf5onqrebu5o9205kfq75bqsesyuldhht96ijsue70uxucl0kxj25uw6vrbwf4k0lqux9n9zoavr1iepfm0zqv757nymazeusf0xn2guxq3ddnyg2ou6b0xps09yta7idqxkna96s7qpacfftk2z8txn5vbqmqur35psiksccq87spcrf7i1gagrzvbddcyvvqx38jhatlhptptgt8w42p29wx67dud52riwomq1e3lzaxhh778443oc36gx96da0drsi2627727scrdqz5n9w3dsnfv2yrk3ua46vxnj6piixx0v5bylzmeq1gp1x5dguag23wjt5cjg3fdtzp4q49da7h5xs21c4vskfjy3kiq8120j122ksxco3125440n134dcrf4lezd76zm5hu2uu12cgm63220nxz309l65bfsj4hwa1qix2uf112b02mri176lzglzs9q4lh1iutbug9537ynr5goif946fjxdwjxgjsj6d4fc0b1kkjzbntncwzti6tm1vpkjiy6g3qxaka605w6mk9bo85kfzvtch3rv9aoiumvwkbzhi0hbaohlu1w5wuyocp0apo0uzpg9bcv8wdrieb6qzt63pidsoyzv9rbp9i9d4zcx8isvn9i0gaylelz1a8d0b6w5axl0lxebxplofcmij1r4emi0erdgh47p93rc9ek75th4oezx17gfry836snt0zjedli3ihjkq1u4z95r0584lbplvfu4htjaux3nor6dh6yretrdo9sy4yfr5f9gcjcqwq9e2zio9o2a7ee3o3pk9f74au1a5va3eq1gqejt53aa3w7myupu37kueuz7smsuiem2lp95wj366uvqfw51nz8i3djmrvy1ho9utfyx9se8u5fvnlq734c4l7b5ayrlv49uk0oztrzifnmibts9yemqg8tadfb83tk2tz84ielorqhglwv0bdl62w3mlvbx600k6ycscceakwzje68hggabzvc3hkbj8hx92rk5v7pfbp0lmnllgfl4k7mxygxpje36ahxnmh626nxjulurmofz7ob42tgqa5ouqanv50mm48j27vr2xm5wq6lpul9ehmyi5widtql92j3nzb48va38f5dnoezqe8wl39vygiswnof6b7qn32ax84d6ktl82ln16huftjbp2igpw0qyavdyj4glxcgb4a1iigtb9jgw2kpwe7511jny2xn403s66kfiivoaqzgnd218xbju8ge0rkxqm7scz8qon1mxvhf6j8is9n0tzvileymco079amnurw803wiyiebjslhpqf3cqp0c5zhaljexq5jhqh6lbmjems4troph29rdegesgemzyltaqiykr5g6h2lv6eqq9srtjcc1beej2z8xis9ikg60ofncuafboe14w9i7jlikpfn6c9ly5sh4if70lrz9dg4p741m90uxci6xu0v2xv8o2e5mpfm550ymc7a2pb0rcevv93lff9jecel2qkytf15x3e8slksprj1lreot6r7zy4uduwak8f0z9kn9e09f8t1fegz4n5u63wcx03svp6jzzh01cam20tml2zckw2pt8er2k6bcqtizpeleu9ea886yv6wiks9z8kv627pndxdv4gjowx0c99szqk2vifzsni6yhmio2gaq2t481aaa866qykwi7hz1nfbkvjy40s06gm83xksotfk7tdph4dtyasdrs40ibu2wi3m7wm6j4elfaiz8q6tk04c2a02qx345v0xz8xhlm57v0xsz6qchiigw82v0in346b2re0lk12fj9unctxfdwfc6hk7y06hofpalm5fdy0y4wgqsbddxbkq49h101drwj7cu5ootnl18rqvmbjedclk0p6m9opacubyui5578hm7x69c7o5munylnudm2d09bgxxcq3wt534q9n72m5p6mvbdijmz1d856pk2odo8kqh85oeiq4bhzoemckizb7jpryp1lluce5eit4qxeilsgf8p5cu2sa91plva05tnbbzx361hgjf14y3id47q5ti31x3pgr8c1en5kp0trwqvr13gpy1v0525bz2m6t3cj8yjt62zanfc376eba18ws13uho5rvjc3bngh21m0g71ixsqrmxhkqix7bu21nv1ljk10kq86sqv7o8w2ppzssm2ju852k4kvqi9xvmentg6lpjp1hch9ptr97kwzv4ov4l7hza1eyrkpzh6vlgb4xcx2i8wyaw0oo23ztzngycjnvvtmuouyadi63gwj1gnl0v2jtnjec3w9v24whmlhpplg8no0s8
jbcibcgzia6i7197yhkcxr3f4gaug5erb6wh1tbvhgxfs8jcdiuxp9wk8auk52b8cqy2dpq6bgy0zi7dezeql1ha4uby6bicxt9893wulbe0ndjwej8jrfbyxhtledu1d44s7g3bp3gzq2w6qv4jvpym5jxbiqqcqr8l7dxae3wy0xm4gk707ncsryn5me3q4rwrd1aldxoq7ftbyvuc6wmeychuoptvbbme4k5om7p45oew19gjczct3cfpsy48ihis0f7ka5waa0t8ak5wdd06dwu2rr6akj9n82g33fi8h3nhkmoaszv3x1lsghpuy6iumuiiaut07bnwpo93l9d1eluv4e1wk2uhc28p6le5joplh4m3zlbcf2gr6kg82d2pvzu35h66hnlq082x11c6kszhudma2xolmlw44qjiscnqdex6xupllp6mia14eqls5x6u9au3by5ak6zq77jocx24m657ozojwsovbwgdbsn7y216q3ln48v5kq7lr9x79o1j56yfl82mnhgf1bdtk6xgsb17oo4eecrx8x61cns7ckk4a84p7n91cwauawfixslbkg9s8xhmakhay6nn0gdxzowb47wgubm1273bb969fsn0yi00xuvm5sip997ys0pyqygh7xv8vlpqnys298hapaa3n35dtf07y1086c9ignwy1yqv1haeh7i60gtjwruupg4sa13c0u2r7hejtpjvfx622tglcuoi5bzuv6w7nvn2k27298pcxclm1ilbt3581foeg9k2vj4s52uopk2ptktj5551kdwykg5jlyabio7a1ge10qknc01pc4c2862byc4ylwvguduomd68xdncxrenk5pwwrrkj9dea9dszxw5ca1ndzt3ukcjvcav58ins9netbcif58q87usnwca5adqq68o9ejpxcyugpsfwq3h0zty30mfwudkdtoci9xyp82j5dcchrxiksj5vhyudzu76m663roo8q3hcr3sgohwc8vl6wygmhjk0pa705mdy4flt1bf4v4ea2kk866k9g9no8mylcwd57uezxznxyk6dwk7veur5cm6vsnvp23rm6nna9xasw6g50v0ltl4suenvo1nzd8fdi52x6wpv09pqjn56v622feiud60hl8m1mupui7kbpf344m65fgqtfl2dvbcmtj84atq8fhm3707257qj32t5v6ugywc1owo7e28serorb28xqiunxpn4wh7syl75c361xpjj39tprzkip2e5hcl98icyd09798h8jzr6y30nqc22pvvu5s9srz7ra7thlfdk9ox8qfuunqqz133ylykdorg4ta7qb4zzz5lus6lo6pbk3j6tob5ykg8vmbyp9hxpyrsiu9el7zj26wtj2g4gbe36m4qtnvx48zimx2k699t2wive9wozb3xxha04hzd3zyx0rtzoxc98ybb3ljbxltb135fhnx2erdsy0ktmiaks8xq2u3iy6nmgczzr1p5zenjvmful4wpgc3u1u8mlueobnuep6qts2nq8strow9h4qqdoqg81rxpak7eiaxk1xtlgdip2rf212almfenbwtl9uldpktuqosri9ri2nr2p4x8fjg218oo2cr27ugvv52il9y6ufff6n29mwuvloddss49clms703y35ispfepg8o9nxf7fizihx3qyki1hbc1z8945rj37bd6ds0zjywgyw0xit05y == \p\w\6\l\n\d\o\a\0\r\g\z\a\t\6\3\f\g\d\o\6\9\y\g\v\2\s\d\0\q\7\n\x\h\f\5\o\n\q\r\e\b\u\5\o\9\2\0\5\k\f\q\7\5\b\q\s\e\s\y\u\l\d\h\h\t\9\6\i\j\s\u\e\7\0\u\x\u\c\l\0\k\x\j\2\5\u\w\6\v\r\b\w\f\4\k\0\l\q\u\x\9\n\9\z\o\a\v\r\1\i\e\p\f\m\0\z\q\v\7\5\7\n\y\m\a\z\e\u\s\f\0\x\n\2\g\u\x\q\3\d\d\n\y\g\2\o\u\6\b\0\x\p\s\0\9\y\t\a\7\i\d\q\x\k\n\a\9\6\s\7\q\p\a\c\f\f\t\k\2\z\8\t\x\n\5\v\b\q\m\q\u\r\3\5\p\s\i\k\s\c\c\q\8\7\s\p\c\r\f\7\i\1\g\a\g\r\z\v\b\d\d\c\y\v\v\q\x\3\8\j\h\a\t\l\h\p\t\p\t\g\t\8\w\4\2\p\2\9\w\x\6\7\d\u\d\5\2\r\i\w\o\m\q\1\e\3\l\z\a\x\h\h\7\7\8\4\4\3\o\c\3\6\g\x\9\6\d\a\0\d\r\s\i\2\6\2\7\7\2\7\s\c\r\d\q\z\5\n\9\w\3\d\s\n\f\v\2\y\r\k\3\u\a\4\6\v\x\n\j\6\p\i\i\x\x\0\v\5\b\y\l\z\m\e\q\1\g\p\1\x\5\d\g\u\a\g\2\3\w\j\t\5\c\j\g\3\f\d\t\z\p\4\q\4\9\d\a\7\h\5\x\s\2\1\c\4\v\s\k\f\j\y\3\k\i\q\8\1\2\0\j\1\2\2\k\s\x\c\o\3\1\2\5\4\4\0\n\1\3\4\d\c\r\f\4\l\e\z\d\7\6\z\m\5\h\u\2\u\u\1\2\c\g\m\6\3\2\2\0\n\x\z\3\0\9\l\6\5\b\f\s\j\4\h\w\a\1\q\i\x\2\u\f\1\1\2\b\0\2\m\r\i\1\7\6\l\z\g\l\z\s\9\q\4\l\h\1\i\u\t\b\u\g\9\5\3\7\y\n\r\5\g\o\i\f\9\4\6\f\j\x\d\w\j\x\g\j\s\j\6\d\4\f\c\0\b\1\k\k\j\z\b\n\t\n\c\w\z\t\i\6\t\m\1\v\p\k\j\i\y\6\g\3\q\x\a\k\a\6\0\5\w\6\m\k\9\b\o\8\5\k\f\z\v\t\c\h\3\r\v\9\a\o\i\u\m\v\w\k\b\z\h\i\0\h\b\a\o\h\l\u\1\w\5\w\u\y\o\c\p\0\a\p\o\0\u\z\p\g\9\b\c\v\8\w\d\r\i\e\b\6\q\z\t\6\3\p\i\d\s\o\y\z\v\9\r\b\p\9\i\9\d\4\z\c\x\8\i\s\v\n\9\i\0\g\a\y\l\e\l\z\1\a\8\d\0\b\6\w\5\a\x\l\0\l\x\e\b\x\p\l\o\f\c\m\i\j\1\r\4\e\m\i\0\e\r\d\g\h\4\7\p\9\3\r\c\9\e\k\7\5\t\h\4\o\e\z\x\1\7\g\f\r\y\8\3\6\s\n\t\0\z\j\e\d\l\i\3\i\h\j\k\q\1\u\4\z\9\5\r\0\5\8\4\l\b\p\l\v\f\u\4\h\t\j\a\u\x\3\n\o\r\6\d\h\6\y\r\e\t\r\d\o\9\s\y\4\y\f\r\5\f\9\g\c\j\c\q\w\q\9\e\2\z\i\o\9\o\2\a\7\e\e\3\o\3\p\k\9\f\7\4\a\u\1\a\5\v\a\3\e\q\1\g\q\e\j\t\5\3\a\a\3\w\7\m\y\u\p\u\3\7\k\u\e\u\z\7\s\m\s\u\i\e\m\2\l\p\9\5\w\j\3\6\6\u\v\q\f\w\5\1\n\z\8\i
\3\d\j\m\r\v\y\1\h\o\9\u\t\f\y\x\9\s\e\8\u\5\f\v\n\l\q\7\3\4\c\4\l\7\b\5\a\y\r\l\v\4\9\u\k\0\o\z\t\r\z\i\f\n\m\i\b\t\s\9\y\e\m\q\g\8\t\a\d\f\b\8\3\t\k\2\t\z\8\4\i\e\l\o\r\q\h\g\l\w\v\0\b\d\l\6\2\w\3\m\l\v\b\x\6\0\0\k\6\y\c\s\c\c\e\a\k\w\z\j\e\6\8\h\g\g\a\b\z\v\c\3\h\k\b\j\8\h\x\9\2\r\k\5\v\7\p\f\b\p\0\l\m\n\l\l\g\f\l\4\k\7\m\x\y\g\x\p\j\e\3\6\a\h\x\n\m\h\6\2\6\n\x\j\u\l\u\r\m\o\f\z\7\o\b\4\2\t\g\q\a\5\o\u\q\a\n\v\5\0\m\m\4\8\j\2\7\v\r\2\x\m\5\w\q\6\l\p\u\l\9\e\h\m\y\i\5\w\i\d\t\q\l\9\2\j\3\n\z\b\4\8\v\a\3\8\f\5\d\n\o\e\z\q\e\8\w\l\3\9\v\y\g\i\s\w\n\o\f\6\b\7\q\n\3\2\a\x\8\4\d\6\k\t\l\8\2\l\n\1\6\h\u\f\t\j\b\p\2\i\g\p\w\0\q\y\a\v\d\y\j\4\g\l\x\c\g\b\4\a\1\i\i\g\t\b\9\j\g\w\2\k\p\w\e\7\5\1\1\j\n\y\2\x\n\4\0\3\s\6\6\k\f\i\i\v\o\a\q\z\g\n\d\2\1\8\x\b\j\u\8\g\e\0\r\k\x\q\m\7\s\c\z\8\q\o\n\1\m\x\v\h\f\6\j\8\i\s\9\n\0\t\z\v\i\l\e\y\m\c\o\0\7\9\a\m\n\u\r\w\8\0\3\w\i\y\i\e\b\j\s\l\h\p\q\f\3\c\q\p\0\c\5\z\h\a\l\j\e\x\q\5\j\h\q\h\6\l\b\m\j\e\m\s\4\t\r\o\p\h\2\9\r\d\e\g\e\s\g\e\m\z\y\l\t\a\q\i\y\k\r\5\g\6\h\2\l\v\6\e\q\q\9\s\r\t\j\c\c\1\b\e\e\j\2\z\8\x\i\s\9\i\k\g\6\0\o\f\n\c\u\a\f\b\o\e\1\4\w\9\i\7\j\l\i\k\p\f\n\6\c\9\l\y\5\s\h\4\i\f\7\0\l\r\z\9\d\g\4\p\7\4\1\m\9\0\u\x\c\i\6\x\u\0\v\2\x\v\8\o\2\e\5\m\p\f\m\5\5\0\y\m\c\7\a\2\p\b\0\r\c\e\v\v\9\3\l\f\f\9\j\e\c\e\l\2\q\k\y\t\f\1\5\x\3\e\8\s\l\k\s\p\r\j\1\l\r\e\o\t\6\r\7\z\y\4\u\d\u\w\a\k\8\f\0\z\9\k\n\9\e\0\9\f\8\t\1\f\e\g\z\4\n\5\u\6\3\w\c\x\0\3\s\v\p\6\j\z\z\h\0\1\c\a\m\2\0\t\m\l\2\z\c\k\w\2\p\t\8\e\r\2\k\6\b\c\q\t\i\z\p\e\l\e\u\9\e\a\8\8\6\y\v\6\w\i\k\s\9\z\8\k\v\6\2\7\p\n\d\x\d\v\4\g\j\o\w\x\0\c\9\9\s\z\q\k\2\v\i\f\z\s\n\i\6\y\h\m\i\o\2\g\a\q\2\t\4\8\1\a\a\a\8\6\6\q\y\k\w\i\7\h\z\1\n\f\b\k\v\j\y\4\0\s\0\6\g\m\8\3\x\k\s\o\t\f\k\7\t\d\p\h\4\d\t\y\a\s\d\r\s\4\0\i\b\u\2\w\i\3\m\7\w\m\6\j\4\e\l\f\a\i\z\8\q\6\t\k\0\4\c\2\a\0\2\q\x\3\4\5\v\0\x\z\8\x\h\l\m\5\7\v\0\x\s\z\6\q\c\h\i\i\g\w\8\2\v\0\i\n\3\4\6\b\2\r\e\0\l\k\1\2\f\j\9\u\n\c\t\x\f\d\w\f\c\6\h\k\7\y\0\6\h\o\f\p\a\l\m\5\f\d\y\0\y\4\w\g\q\s\b\d\d\x\b\k\q\4\9\h\1\0\1\d\r\w\j\7\c\u\5\o\o\t\n\l\1\8\r\q\v\m\b\j\e\d\c\l\k\0\p\6\m\9\o\p\a\c\u\b\y\u\i\5\5\7\8\h\m\7\x\6\9\c\7\o\5\m\u\n\y\l\n\u\d\m\2\d\0\9\b\g\x\x\c\q\3\w\t\5\3\4\q\9\n\7\2\m\5\p\6\m\v\b\d\i\j\m\z\1\d\8\5\6\p\k\2\o\d\o\8\k\q\h\8\5\o\e\i\q\4\b\h\z\o\e\m\c\k\i\z\b\7\j\p\r\y\p\1\l\l\u\c\e\5\e\i\t\4\q\x\e\i\l\s\g\f\8\p\5\c\u\2\s\a\9\1\p\l\v\a\0\5\t\n\b\b\z\x\3\6\1\h\g\j\f\1\4\y\3\i\d\4\7\q\5\t\i\3\1\x\3\p\g\r\8\c\1\e\n\5\k\p\0\t\r\w\q\v\r\1\3\g\p\y\1\v\0\5\2\5\b\z\2\m\6\t\3\c\j\8\y\j\t\6\2\z\a\n\f\c\3\7\6\e\b\a\1\8\w\s\1\3\u\h\o\5\r\v\j\c\3\b\n\g\h\2\1\m\0\g\7\1\i\x\s\q\r\m\x\h\k\q\i\x\7\b\u\2\1\n\v\1\l\j\k\1\0\k\q\8\6\s\q\v\7\o\8\w\2\p\p\z\s\s\m\2\j\u\8\5\2\k\4\k\v\q\i\9\x\v\m\e\n\t\g\6\l\p\j\p\1\h\c\h\9\p\t\r\9\7\k\w\z\v\4\o\v\4\l\7\h\z\a\1\e\y\r\k\p\z\h\6\v\l\g\b\4\x\c\x\2\i\8\w\y\a\w\0\o\o\2\3\z\t\z\n\g\y\c\j\n\v\v\t\m\u\o\u\y\a\d\i\6\3\g\w\j\1\g\n\l\0\v\2\j\t\n\j\e\c\3\w\9\v\2\4\w\h\m\l\h\p\p\l\g\8\n\o\0\s\8\j\b\c\i\b\c\g\z\i\a\6\i\7\1\9\7\y\h\k\c\x\r\3\f\4\g\a\u\g\5\e\r\b\6\w\h\1\t\b\v\h\g\x\f\s\8\j\c\d\i\u\x\p\9\w\k\8\a\u\k\5\2\b\8\c\q\y\2\d\p\q\6\b\g\y\0\z\i\7\d\e\z\e\q\l\1\h\a\4\u\b\y\6\b\i\c\x\t\9\8\9\3\w\u\l\b\e\0\n\d\j\w\e\j\8\j\r\f\b\y\x\h\t\l\e\d\u\1\d\4\4\s\7\g\3\b\p\3\g\z\q\2\w\6\q\v\4\j\v\p\y\m\5\j\x\b\i\q\q\c\q\r\8\l\7\d\x\a\e\3\w\y\0\x\m\4\g\k\7\0\7\n\c\s\r\y\n\5\m\e\3\q\4\r\w\r\d\1\a\l\d\x\o\q\7\f\t\b\y\v\u\c\6\w\m\e\y\c\h\u\o\p\t\v\b\b\m\e\4\k\5\o\m\7\p\4\5\o\e\w\1\9\g\j\c\z\c\t\3\c\f\p\s\y\4\8\i\h\i\s\0\f\7\k\a\5\w\a\a\0\t\8\a\k\5\w\d\d\0\6\d\w\u\2\r\r\6\a\k\j\9\n\8\2\g\3\3\f\i\8\h\3\n\h\k\m\o\a\s\z\v\3\x\1\l\s\g\h\p\u\y\6\i\u\m\
u\i\i\a\u\t\0\7\b\n\w\p\o\9\3\l\9\d\1\e\l\u\v\4\e\1\w\k\2\u\h\c\2\8\p\6\l\e\5\j\o\p\l\h\4\m\3\z\l\b\c\f\2\g\r\6\k\g\8\2\d\2\p\v\z\u\3\5\h\6\6\h\n\l\q\0\8\2\x\1\1\c\6\k\s\z\h\u\d\m\a\2\x\o\l\m\l\w\4\4\q\j\i\s\c\n\q\d\e\x\6\x\u\p\l\l\p\6\m\i\a\1\4\e\q\l\s\5\x\6\u\9\a\u\3\b\y\5\a\k\6\z\q\7\7\j\o\c\x\2\4\m\6\5\7\o\z\o\j\w\s\o\v\b\w\g\d\b\s\n\7\y\2\1\6\q\3\l\n\4\8\v\5\k\q\7\l\r\9\x\7\9\o\1\j\5\6\y\f\l\8\2\m\n\h\g\f\1\b\d\t\k\6\x\g\s\b\1\7\o\o\4\e\e\c\r\x\8\x\6\1\c\n\s\7\c\k\k\4\a\8\4\p\7\n\9\1\c\w\a\u\a\w\f\i\x\s\l\b\k\g\9\s\8\x\h\m\a\k\h\a\y\6\n\n\0\g\d\x\z\o\w\b\4\7\w\g\u\b\m\1\2\7\3\b\b\9\6\9\f\s\n\0\y\i\0\0\x\u\v\m\5\s\i\p\9\9\7\y\s\0\p\y\q\y\g\h\7\x\v\8\v\l\p\q\n\y\s\2\9\8\h\a\p\a\a\3\n\3\5\d\t\f\0\7\y\1\0\8\6\c\9\i\g\n\w\y\1\y\q\v\1\h\a\e\h\7\i\6\0\g\t\j\w\r\u\u\p\g\4\s\a\1\3\c\0\u\2\r\7\h\e\j\t\p\j\v\f\x\6\2\2\t\g\l\c\u\o\i\5\b\z\u\v\6\w\7\n\v\n\2\k\2\7\2\9\8\p\c\x\c\l\m\1\i\l\b\t\3\5\8\1\f\o\e\g\9\k\2\v\j\4\s\5\2\u\o\p\k\2\p\t\k\t\j\5\5\5\1\k\d\w\y\k\g\5\j\l\y\a\b\i\o\7\a\1\g\e\1\0\q\k\n\c\0\1\p\c\4\c\2\8\6\2\b\y\c\4\y\l\w\v\g\u\d\u\o\m\d\6\8\x\d\n\c\x\r\e\n\k\5\p\w\w\r\r\k\j\9\d\e\a\9\d\s\z\x\w\5\c\a\1\n\d\z\t\3\u\k\c\j\v\c\a\v\5\8\i\n\s\9\n\e\t\b\c\i\f\5\8\q\8\7\u\s\n\w\c\a\5\a\d\q\q\6\8\o\9\e\j\p\x\c\y\u\g\p\s\f\w\q\3\h\0\z\t\y\3\0\m\f\w\u\d\k\d\t\o\c\i\9\x\y\p\8\2\j\5\d\c\c\h\r\x\i\k\s\j\5\v\h\y\u\d\z\u\7\6\m\6\6\3\r\o\o\8\q\3\h\c\r\3\s\g\o\h\w\c\8\v\l\6\w\y\g\m\h\j\k\0\p\a\7\0\5\m\d\y\4\f\l\t\1\b\f\4\v\4\e\a\2\k\k\8\6\6\k\9\g\9\n\o\8\m\y\l\c\w\d\5\7\u\e\z\x\z\n\x\y\k\6\d\w\k\7\v\e\u\r\5\c\m\6\v\s\n\v\p\2\3\r\m\6\n\n\a\9\x\a\s\w\6\g\5\0\v\0\l\t\l\4\s\u\e\n\v\o\1\n\z\d\8\f\d\i\5\2\x\6\w\p\v\0\9\p\q\j\n\5\6\v\6\2\2\f\e\i\u\d\6\0\h\l\8\m\1\m\u\p\u\i\7\k\b\p\f\3\4\4\m\6\5\f\g\q\t\f\l\2\d\v\b\c\m\t\j\8\4\a\t\q\8\f\h\m\3\7\0\7\2\5\7\q\j\3\2\t\5\v\6\u\g\y\w\c\1\o\w\o\7\e\2\8\s\e\r\o\r\b\2\8\x\q\i\u\n\x\p\n\4\w\h\7\s\y\l\7\5\c\3\6\1\x\p\j\j\3\9\t\p\r\z\k\i\p\2\e\5\h\c\l\9\8\i\c\y\d\0\9\7\9\8\h\8\j\z\r\6\y\3\0\n\q\c\2\2\p\v\v\u\5\s\9\s\r\z\7\r\a\7\t\h\l\f\d\k\9\o\x\8\q\f\u\u\n\q\q\z\1\3\3\y\l\y\k\d\o\r\g\4\t\a\7\q\b\4\z\z\z\5\l\u\s\6\l\o\6\p\b\k\3\j\6\t\o\b\5\y\k\g\8\v\m\b\y\p\9\h\x\p\y\r\s\i\u\9\e\l\7\z\j\2\6\w\t\j\2\g\4\g\b\e\3\6\m\4\q\t\n\v\x\4\8\z\i\m\x\2\k\6\9\9\t\2\w\i\v\e\9\w\o\z\b\3\x\x\h\a\0\4\h\z\d\3\z\y\x\0\r\t\z\o\x\c\9\8\y\b\b\3\l\j\b\x\l\t\b\1\3\5\f\h\n\x\2\e\r\d\s\y\0\k\t\m\i\a\k\s\8\x\q\2\u\3\i\y\6\n\m\g\c\z\z\r\1\p\5\z\e\n\j\v\m\f\u\l\4\w\p\g\c\3\u\1\u\8\m\l\u\e\o\b\n\u\e\p\6\q\t\s\2\n\q\8\s\t\r\o\w\9\h\4\q\q\d\o\q\g\8\1\r\x\p\a\k\7\e\i\a\x\k\1\x\t\l\g\d\i\p\2\r\f\2\1\2\a\l\m\f\e\n\b\w\t\l\9\u\l\d\p\k\t\u\q\o\s\r\i\9\r\i\2\n\r\2\p\4\x\8\f\j\g\2\1\8\o\o\2\c\r\2\7\u\g\v\v\5\2\i\l\9\y\6\u\f\f\f\6\n\2\9\m\w\u\v\l\o\d\d\s\s\4\9\c\l\m\s\7\0\3\y\3\5\i\s\p\f\e\p\g\8\o\9\n\x\f\7\f\i\z\i\h\x\3\q\y\k\i\1\h\b\c\1\z\8\9\4\5\r\j\3\7\b\d\6\d\s\0\z\j\y\w\g\y\w\0\x\i\t\0\5\y ]] 00:08:41.856 ************************************ 00:08:41.856 END TEST dd_rw_offset 00:08:41.856 ************************************ 00:08:41.856 00:08:41.856 real 0m1.314s 00:08:41.856 user 0m0.884s 00:08:41.856 sys 0m0.311s 00:08:41.856 07:18:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:41.856 07:18:03 -- common/autotest_common.sh@10 -- # set +x 00:08:41.856 07:18:03 -- dd/basic_rw.sh@1 -- # cleanup 00:08:41.856 07:18:03 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:41.856 07:18:03 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:41.856 07:18:03 -- dd/common.sh@11 -- # local nvme_ref= 00:08:41.856 07:18:03 -- dd/common.sh@12 -- # local size=0xffff 00:08:41.856 07:18:03 -- dd/common.sh@14 -- 
# local bs=1048576 00:08:41.856 07:18:03 -- dd/common.sh@15 -- # local count=1 00:08:41.856 07:18:03 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:41.856 07:18:03 -- dd/common.sh@18 -- # gen_conf 00:08:41.856 07:18:03 -- dd/common.sh@31 -- # xtrace_disable 00:08:41.856 07:18:03 -- common/autotest_common.sh@10 -- # set +x 00:08:41.856 [2024-11-28 07:18:03.986380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:41.856 [2024-11-28 07:18:03.986479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70331 ] 00:08:41.856 { 00:08:41.856 "subsystems": [ 00:08:41.856 { 00:08:41.856 "subsystem": "bdev", 00:08:41.856 "config": [ 00:08:41.856 { 00:08:41.856 "params": { 00:08:41.856 "trtype": "pcie", 00:08:41.856 "traddr": "0000:00:06.0", 00:08:41.856 "name": "Nvme0" 00:08:41.856 }, 00:08:41.856 "method": "bdev_nvme_attach_controller" 00:08:41.856 }, 00:08:41.856 { 00:08:41.856 "method": "bdev_wait_for_examine" 00:08:41.856 } 00:08:41.856 ] 00:08:41.856 } 00:08:41.856 ] 00:08:41.856 } 00:08:41.856 [2024-11-28 07:18:04.124605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.114 [2024-11-28 07:18:04.207032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.114  [2024-11-28T07:18:04.673Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:42.398 00:08:42.398 07:18:04 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.398 00:08:42.398 real 0m19.156s 00:08:42.398 user 0m13.564s 00:08:42.398 sys 0m4.111s 00:08:42.398 07:18:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.398 07:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.398 ************************************ 00:08:42.398 END TEST spdk_dd_basic_rw 00:08:42.398 ************************************ 00:08:42.398 07:18:04 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:42.398 07:18:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.398 07:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.398 07:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.398 ************************************ 00:08:42.398 START TEST spdk_dd_posix 00:08:42.398 ************************************ 00:08:42.398 07:18:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:42.675 * Looking for test storage... 
00:08:42.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:42.675 07:18:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:42.675 07:18:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:42.676 07:18:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:42.676 07:18:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:42.676 07:18:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:42.676 07:18:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:42.676 07:18:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:42.676 07:18:04 -- scripts/common.sh@335 -- # IFS=.-: 00:08:42.676 07:18:04 -- scripts/common.sh@335 -- # read -ra ver1 00:08:42.676 07:18:04 -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.676 07:18:04 -- scripts/common.sh@336 -- # read -ra ver2 00:08:42.676 07:18:04 -- scripts/common.sh@337 -- # local 'op=<' 00:08:42.676 07:18:04 -- scripts/common.sh@339 -- # ver1_l=2 00:08:42.676 07:18:04 -- scripts/common.sh@340 -- # ver2_l=1 00:08:42.676 07:18:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:42.676 07:18:04 -- scripts/common.sh@343 -- # case "$op" in 00:08:42.676 07:18:04 -- scripts/common.sh@344 -- # : 1 00:08:42.676 07:18:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:42.676 07:18:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.676 07:18:04 -- scripts/common.sh@364 -- # decimal 1 00:08:42.676 07:18:04 -- scripts/common.sh@352 -- # local d=1 00:08:42.676 07:18:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.676 07:18:04 -- scripts/common.sh@354 -- # echo 1 00:08:42.676 07:18:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:42.676 07:18:04 -- scripts/common.sh@365 -- # decimal 2 00:08:42.676 07:18:04 -- scripts/common.sh@352 -- # local d=2 00:08:42.676 07:18:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.676 07:18:04 -- scripts/common.sh@354 -- # echo 2 00:08:42.676 07:18:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:42.676 07:18:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:42.676 07:18:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:42.676 07:18:04 -- scripts/common.sh@367 -- # return 0 00:08:42.676 07:18:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.676 07:18:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:42.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.676 --rc genhtml_branch_coverage=1 00:08:42.676 --rc genhtml_function_coverage=1 00:08:42.676 --rc genhtml_legend=1 00:08:42.676 --rc geninfo_all_blocks=1 00:08:42.676 --rc geninfo_unexecuted_blocks=1 00:08:42.676 00:08:42.676 ' 00:08:42.676 07:18:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:42.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.676 --rc genhtml_branch_coverage=1 00:08:42.676 --rc genhtml_function_coverage=1 00:08:42.676 --rc genhtml_legend=1 00:08:42.676 --rc geninfo_all_blocks=1 00:08:42.676 --rc geninfo_unexecuted_blocks=1 00:08:42.676 00:08:42.676 ' 00:08:42.676 07:18:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:42.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.676 --rc genhtml_branch_coverage=1 00:08:42.676 --rc genhtml_function_coverage=1 00:08:42.676 --rc genhtml_legend=1 00:08:42.676 --rc geninfo_all_blocks=1 00:08:42.676 --rc geninfo_unexecuted_blocks=1 00:08:42.676 00:08:42.676 ' 00:08:42.676 07:18:04 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:42.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.676 --rc genhtml_branch_coverage=1 00:08:42.676 --rc genhtml_function_coverage=1 00:08:42.676 --rc genhtml_legend=1 00:08:42.676 --rc geninfo_all_blocks=1 00:08:42.676 --rc geninfo_unexecuted_blocks=1 00:08:42.676 00:08:42.676 ' 00:08:42.676 07:18:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:42.676 07:18:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.676 07:18:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.676 07:18:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.676 07:18:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.676 07:18:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.676 07:18:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.676 07:18:04 -- paths/export.sh@5 -- # export PATH 00:08:42.676 07:18:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.676 07:18:04 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:42.676 07:18:04 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:42.676 07:18:04 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:42.676 07:18:04 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:42.676 07:18:04 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.676 07:18:04 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:42.676 07:18:04 -- dd/posix.sh@130 -- # tests 00:08:42.676 07:18:04 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:42.676 * First test run, liburing in use 00:08:42.676 07:18:04 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:42.676 07:18:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.676 07:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.676 07:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.676 ************************************ 00:08:42.676 START TEST dd_flag_append 00:08:42.676 ************************************ 00:08:42.676 07:18:04 -- common/autotest_common.sh@1114 -- # append 00:08:42.676 07:18:04 -- dd/posix.sh@16 -- # local dump0 00:08:42.676 07:18:04 -- dd/posix.sh@17 -- # local dump1 00:08:42.676 07:18:04 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:42.676 07:18:04 -- dd/common.sh@98 -- # xtrace_disable 00:08:42.676 07:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.676 07:18:04 -- dd/posix.sh@19 -- # dump0=m4ksk6xcbm33nx14131z8yzbr1fnlrus 00:08:42.676 07:18:04 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:42.676 07:18:04 -- dd/common.sh@98 -- # xtrace_disable 00:08:42.676 07:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:42.676 07:18:04 -- dd/posix.sh@20 -- # dump1=vyh841jys32kpcnr246s9nbb4dljvcwd 00:08:42.676 07:18:04 -- dd/posix.sh@22 -- # printf %s m4ksk6xcbm33nx14131z8yzbr1fnlrus 00:08:42.676 07:18:04 -- dd/posix.sh@23 -- # printf %s vyh841jys32kpcnr246s9nbb4dljvcwd 00:08:42.676 07:18:04 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:42.676 [2024-11-28 07:18:04.854835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:42.676 [2024-11-28 07:18:04.854925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70397 ] 00:08:42.934 [2024-11-28 07:18:04.986225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.934 [2024-11-28 07:18:05.067951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.934  [2024-11-28T07:18:05.468Z] Copying: 32/32 [B] (average 31 kBps) 00:08:43.193 00:08:43.193 07:18:05 -- dd/posix.sh@27 -- # [[ vyh841jys32kpcnr246s9nbb4dljvcwdm4ksk6xcbm33nx14131z8yzbr1fnlrus == \v\y\h\8\4\1\j\y\s\3\2\k\p\c\n\r\2\4\6\s\9\n\b\b\4\d\l\j\v\c\w\d\m\4\k\s\k\6\x\c\b\m\3\3\n\x\1\4\1\3\1\z\8\y\z\b\r\1\f\n\l\r\u\s ]] 00:08:43.193 00:08:43.193 real 0m0.546s 00:08:43.193 user 0m0.291s 00:08:43.193 sys 0m0.135s 00:08:43.193 07:18:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.193 ************************************ 00:08:43.193 END TEST dd_flag_append 00:08:43.193 07:18:05 -- common/autotest_common.sh@10 -- # set +x 00:08:43.193 ************************************ 00:08:43.193 07:18:05 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:43.193 07:18:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.193 07:18:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.193 07:18:05 -- common/autotest_common.sh@10 -- # set +x 00:08:43.193 ************************************ 00:08:43.193 START TEST dd_flag_directory 00:08:43.193 ************************************ 00:08:43.193 07:18:05 -- common/autotest_common.sh@1114 -- # directory 00:08:43.193 07:18:05 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.193 07:18:05 -- common/autotest_common.sh@650 -- # local es=0 00:08:43.193 07:18:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.193 07:18:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.193 07:18:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.193 07:18:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.193 07:18:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.193 07:18:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.193 07:18:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.193 07:18:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.193 07:18:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.193 07:18:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.193 [2024-11-28 07:18:05.449202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:43.193 [2024-11-28 07:18:05.449295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70423 ] 00:08:43.452 [2024-11-28 07:18:05.581486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.452 [2024-11-28 07:18:05.669689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.711 [2024-11-28 07:18:05.751809] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:43.711 [2024-11-28 07:18:05.751871] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:43.711 [2024-11-28 07:18:05.751901] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.711 [2024-11-28 07:18:05.859414] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:43.711 07:18:05 -- common/autotest_common.sh@653 -- # es=236 00:08:43.711 07:18:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.711 07:18:05 -- common/autotest_common.sh@662 -- # es=108 00:08:43.711 07:18:05 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:43.711 07:18:05 -- common/autotest_common.sh@670 -- # es=1 00:08:43.711 07:18:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.711 07:18:05 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:43.711 07:18:05 -- common/autotest_common.sh@650 -- # local es=0 00:08:43.711 07:18:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:43.711 07:18:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.711 07:18:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.711 07:18:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.711 07:18:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.711 07:18:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.711 07:18:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.711 07:18:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.711 07:18:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.711 07:18:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:43.711 [2024-11-28 07:18:05.983499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:43.711 [2024-11-28 07:18:05.983596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70433 ] 00:08:43.969 [2024-11-28 07:18:06.120626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.969 [2024-11-28 07:18:06.205248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.228 [2024-11-28 07:18:06.285475] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:44.229 [2024-11-28 07:18:06.285530] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:44.229 [2024-11-28 07:18:06.285544] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.229 [2024-11-28 07:18:06.391473] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:44.229 07:18:06 -- common/autotest_common.sh@653 -- # es=236 00:08:44.229 07:18:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.229 07:18:06 -- common/autotest_common.sh@662 -- # es=108 00:08:44.229 07:18:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:44.229 07:18:06 -- common/autotest_common.sh@670 -- # es=1 00:08:44.229 07:18:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.229 00:08:44.229 real 0m1.070s 00:08:44.229 user 0m0.602s 00:08:44.229 sys 0m0.260s 00:08:44.229 07:18:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.229 ************************************ 00:08:44.229 END TEST dd_flag_directory 00:08:44.229 ************************************ 00:08:44.229 07:18:06 -- common/autotest_common.sh@10 -- # set +x 00:08:44.488 07:18:06 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:44.488 07:18:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:44.488 07:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.488 07:18:06 -- common/autotest_common.sh@10 -- # set +x 00:08:44.488 ************************************ 00:08:44.488 START TEST dd_flag_nofollow 00:08:44.488 ************************************ 00:08:44.488 07:18:06 -- common/autotest_common.sh@1114 -- # nofollow 00:08:44.488 07:18:06 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:44.488 07:18:06 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:44.488 07:18:06 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:44.488 07:18:06 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:44.488 07:18:06 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.488 07:18:06 -- common/autotest_common.sh@650 -- # local es=0 00:08:44.488 07:18:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.488 07:18:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.488 07:18:06 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.488 07:18:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.488 07:18:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.488 07:18:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.488 07:18:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.488 07:18:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.488 07:18:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:44.488 07:18:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.488 [2024-11-28 07:18:06.587954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:44.488 [2024-11-28 07:18:06.588077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70467 ] 00:08:44.488 [2024-11-28 07:18:06.729363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.748 [2024-11-28 07:18:06.813492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.748 [2024-11-28 07:18:06.893734] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:44.748 [2024-11-28 07:18:06.893805] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:44.748 [2024-11-28 07:18:06.893834] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.748 [2024-11-28 07:18:06.999229] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:45.008 07:18:07 -- common/autotest_common.sh@653 -- # es=216 00:08:45.008 07:18:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:45.008 07:18:07 -- common/autotest_common.sh@662 -- # es=88 00:08:45.008 07:18:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:45.008 07:18:07 -- common/autotest_common.sh@670 -- # es=1 00:08:45.008 07:18:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:45.008 07:18:07 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:45.008 07:18:07 -- common/autotest_common.sh@650 -- # local es=0 00:08:45.008 07:18:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:45.008 07:18:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.008 07:18:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.008 07:18:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.008 07:18:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.008 07:18:07 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.008 07:18:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.008 07:18:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.008 07:18:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:45.008 07:18:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:45.008 [2024-11-28 07:18:07.142621] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:45.008 [2024-11-28 07:18:07.142744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70475 ] 00:08:45.008 [2024-11-28 07:18:07.281419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.266 [2024-11-28 07:18:07.352474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.266 [2024-11-28 07:18:07.435097] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:45.266 [2024-11-28 07:18:07.435168] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:45.266 [2024-11-28 07:18:07.435199] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:45.524 [2024-11-28 07:18:07.543682] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:45.524 07:18:07 -- common/autotest_common.sh@653 -- # es=216 00:08:45.524 07:18:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:45.524 07:18:07 -- common/autotest_common.sh@662 -- # es=88 00:08:45.524 07:18:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:45.524 07:18:07 -- common/autotest_common.sh@670 -- # es=1 00:08:45.524 07:18:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:45.524 07:18:07 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:45.524 07:18:07 -- dd/common.sh@98 -- # xtrace_disable 00:08:45.524 07:18:07 -- common/autotest_common.sh@10 -- # set +x 00:08:45.524 07:18:07 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.524 [2024-11-28 07:18:07.694274] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:45.525 [2024-11-28 07:18:07.694420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70484 ] 00:08:45.783 [2024-11-28 07:18:07.835438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.783 [2024-11-28 07:18:07.926137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.783  [2024-11-28T07:18:08.317Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.042 00:08:46.042 07:18:08 -- dd/posix.sh@49 -- # [[ b79khjf5dbwaubs46kyuol9atn81pi8hflj1e4cpqm1kcact7c784yj8pq0mjkj3lk6gnifhmt9inrvfapt6cqh79cdim8xf4h8j38siy910e4k38afijqagpku287rqufndwgy7z73t5pj6zmjoe0yx8xp8nkze2ixzd9t76qzc1w1t6o3g5lk0mkk3n2ltvmwy1c1nf95lxgtop0y1wlg3ldbb07dc05ssfrxwb6t007lled5k9vqr0iqmajdjsj8vrn250743jnxkpjhdvjvljscu4dl96s1yc1ka7tfjp5tm2mp80m60945f7z7ilx91fnpngfhswhl3c7p7lx1o1xrwf3zzoqr7sflen79f0put920rtm5ef03m02nij8z07z2dew1swgtr36a8vdsxg56f9csl9lb3m4djmljjdcl53tk0im2wblj5lhn7qa0qt6ptuw7wmmuom0jvsxorfka719wlvbjj9phqs2wsut2auns8m5660kljngph == \b\7\9\k\h\j\f\5\d\b\w\a\u\b\s\4\6\k\y\u\o\l\9\a\t\n\8\1\p\i\8\h\f\l\j\1\e\4\c\p\q\m\1\k\c\a\c\t\7\c\7\8\4\y\j\8\p\q\0\m\j\k\j\3\l\k\6\g\n\i\f\h\m\t\9\i\n\r\v\f\a\p\t\6\c\q\h\7\9\c\d\i\m\8\x\f\4\h\8\j\3\8\s\i\y\9\1\0\e\4\k\3\8\a\f\i\j\q\a\g\p\k\u\2\8\7\r\q\u\f\n\d\w\g\y\7\z\7\3\t\5\p\j\6\z\m\j\o\e\0\y\x\8\x\p\8\n\k\z\e\2\i\x\z\d\9\t\7\6\q\z\c\1\w\1\t\6\o\3\g\5\l\k\0\m\k\k\3\n\2\l\t\v\m\w\y\1\c\1\n\f\9\5\l\x\g\t\o\p\0\y\1\w\l\g\3\l\d\b\b\0\7\d\c\0\5\s\s\f\r\x\w\b\6\t\0\0\7\l\l\e\d\5\k\9\v\q\r\0\i\q\m\a\j\d\j\s\j\8\v\r\n\2\5\0\7\4\3\j\n\x\k\p\j\h\d\v\j\v\l\j\s\c\u\4\d\l\9\6\s\1\y\c\1\k\a\7\t\f\j\p\5\t\m\2\m\p\8\0\m\6\0\9\4\5\f\7\z\7\i\l\x\9\1\f\n\p\n\g\f\h\s\w\h\l\3\c\7\p\7\l\x\1\o\1\x\r\w\f\3\z\z\o\q\r\7\s\f\l\e\n\7\9\f\0\p\u\t\9\2\0\r\t\m\5\e\f\0\3\m\0\2\n\i\j\8\z\0\7\z\2\d\e\w\1\s\w\g\t\r\3\6\a\8\v\d\s\x\g\5\6\f\9\c\s\l\9\l\b\3\m\4\d\j\m\l\j\j\d\c\l\5\3\t\k\0\i\m\2\w\b\l\j\5\l\h\n\7\q\a\0\q\t\6\p\t\u\w\7\w\m\m\u\o\m\0\j\v\s\x\o\r\f\k\a\7\1\9\w\l\v\b\j\j\9\p\h\q\s\2\w\s\u\t\2\a\u\n\s\8\m\5\6\6\0\k\l\j\n\g\p\h ]] 00:08:46.042 00:08:46.042 real 0m1.688s 00:08:46.042 user 0m0.927s 00:08:46.042 sys 0m0.429s 00:08:46.042 ************************************ 00:08:46.042 07:18:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.042 07:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:46.042 END TEST dd_flag_nofollow 00:08:46.042 ************************************ 00:08:46.042 07:18:08 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:46.042 07:18:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.042 07:18:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.042 07:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:46.042 ************************************ 00:08:46.042 START TEST dd_flag_noatime 00:08:46.042 ************************************ 00:08:46.042 07:18:08 -- common/autotest_common.sh@1114 -- # noatime 00:08:46.042 07:18:08 -- dd/posix.sh@53 -- # local atime_if 00:08:46.042 07:18:08 -- dd/posix.sh@54 -- # local atime_of 00:08:46.042 07:18:08 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:46.042 07:18:08 -- dd/common.sh@98 -- # xtrace_disable 00:08:46.042 07:18:08 -- common/autotest_common.sh@10 -- # set +x 00:08:46.042 07:18:08 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:46.042 07:18:08 -- dd/posix.sh@60 -- # atime_if=1732778288 
00:08:46.042 07:18:08 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.042 07:18:08 -- dd/posix.sh@61 -- # atime_of=1732778288 00:08:46.042 07:18:08 -- dd/posix.sh@66 -- # sleep 1 00:08:47.420 07:18:09 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.420 [2024-11-28 07:18:09.331333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:47.420 [2024-11-28 07:18:09.331486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70525 ] 00:08:47.420 [2024-11-28 07:18:09.467330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.420 [2024-11-28 07:18:09.559442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.420  [2024-11-28T07:18:09.955Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.680 00:08:47.680 07:18:09 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:47.680 07:18:09 -- dd/posix.sh@69 -- # (( atime_if == 1732778288 )) 00:08:47.680 07:18:09 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.680 07:18:09 -- dd/posix.sh@70 -- # (( atime_of == 1732778288 )) 00:08:47.680 07:18:09 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.680 [2024-11-28 07:18:09.914582] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:47.680 [2024-11-28 07:18:09.914699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70537 ] 00:08:47.939 [2024-11-28 07:18:10.053164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.939 [2024-11-28 07:18:10.137455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.198  [2024-11-28T07:18:10.473Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.198 00:08:48.198 07:18:10 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.198 07:18:10 -- dd/posix.sh@73 -- # (( atime_if < 1732778290 )) 00:08:48.198 00:08:48.198 real 0m2.177s 00:08:48.198 user 0m0.624s 00:08:48.198 sys 0m0.313s 00:08:48.198 07:18:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:48.198 07:18:10 -- common/autotest_common.sh@10 -- # set +x 00:08:48.198 ************************************ 00:08:48.198 END TEST dd_flag_noatime 00:08:48.198 ************************************ 00:08:48.458 07:18:10 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:48.458 07:18:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.458 07:18:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.458 07:18:10 -- common/autotest_common.sh@10 -- # set +x 00:08:48.458 ************************************ 00:08:48.458 START TEST dd_flags_misc 00:08:48.458 ************************************ 00:08:48.458 07:18:10 -- common/autotest_common.sh@1114 -- # io 00:08:48.458 07:18:10 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:48.458 07:18:10 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:48.458 07:18:10 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:48.458 07:18:10 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:48.458 07:18:10 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:48.458 07:18:10 -- dd/common.sh@98 -- # xtrace_disable 00:08:48.458 07:18:10 -- common/autotest_common.sh@10 -- # set +x 00:08:48.458 07:18:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.458 07:18:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:48.458 [2024-11-28 07:18:10.550055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:48.458 [2024-11-28 07:18:10.550195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70563 ] 00:08:48.458 [2024-11-28 07:18:10.690337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.715 [2024-11-28 07:18:10.770343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.715  [2024-11-28T07:18:11.248Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.973 00:08:48.973 07:18:11 -- dd/posix.sh@93 -- # [[ xs84t3t0oisdcokzcc2kvsj36v88h257uq9zn5ica7qhqsiwmad9mykk7yudgeldf6wjttzfv2xpzxcmu9rwgxstj2pg4nzxeh2yl23b570buprtm7m9guk16b7qwf6dv14ouyzdavf49r7numj80blkf9rzjoiilelalvrtqjsy61q74iq6wyjkw3utuohwzfcb7goct20xzt7s553vbavq7ix026o4igjh29ang14y4ijx73drgtvm782rx3l2gmn8pltpgnqkji35rrtolqkckoi3xcdext6a5lqlliqi6gsi4m8ho5ckaonduy1aqv3249zxgv4iw8zpodigkre1jjcbpcr4k8bgzh8xc80p428j03u5yzhve6l5tpeozi08p1cr3lbdfws09teb1wivt6xqzovopmmks0igzdqxqvku0bnnczthvus0myhr9lgtc5yzhrfmu7ev8tkg9i2krsmc9bts3n4pjo4wakcaab0w2c4vl9f5as08i4dn == \x\s\8\4\t\3\t\0\o\i\s\d\c\o\k\z\c\c\2\k\v\s\j\3\6\v\8\8\h\2\5\7\u\q\9\z\n\5\i\c\a\7\q\h\q\s\i\w\m\a\d\9\m\y\k\k\7\y\u\d\g\e\l\d\f\6\w\j\t\t\z\f\v\2\x\p\z\x\c\m\u\9\r\w\g\x\s\t\j\2\p\g\4\n\z\x\e\h\2\y\l\2\3\b\5\7\0\b\u\p\r\t\m\7\m\9\g\u\k\1\6\b\7\q\w\f\6\d\v\1\4\o\u\y\z\d\a\v\f\4\9\r\7\n\u\m\j\8\0\b\l\k\f\9\r\z\j\o\i\i\l\e\l\a\l\v\r\t\q\j\s\y\6\1\q\7\4\i\q\6\w\y\j\k\w\3\u\t\u\o\h\w\z\f\c\b\7\g\o\c\t\2\0\x\z\t\7\s\5\5\3\v\b\a\v\q\7\i\x\0\2\6\o\4\i\g\j\h\2\9\a\n\g\1\4\y\4\i\j\x\7\3\d\r\g\t\v\m\7\8\2\r\x\3\l\2\g\m\n\8\p\l\t\p\g\n\q\k\j\i\3\5\r\r\t\o\l\q\k\c\k\o\i\3\x\c\d\e\x\t\6\a\5\l\q\l\l\i\q\i\6\g\s\i\4\m\8\h\o\5\c\k\a\o\n\d\u\y\1\a\q\v\3\2\4\9\z\x\g\v\4\i\w\8\z\p\o\d\i\g\k\r\e\1\j\j\c\b\p\c\r\4\k\8\b\g\z\h\8\x\c\8\0\p\4\2\8\j\0\3\u\5\y\z\h\v\e\6\l\5\t\p\e\o\z\i\0\8\p\1\c\r\3\l\b\d\f\w\s\0\9\t\e\b\1\w\i\v\t\6\x\q\z\o\v\o\p\m\m\k\s\0\i\g\z\d\q\x\q\v\k\u\0\b\n\n\c\z\t\h\v\u\s\0\m\y\h\r\9\l\g\t\c\5\y\z\h\r\f\m\u\7\e\v\8\t\k\g\9\i\2\k\r\s\m\c\9\b\t\s\3\n\4\p\j\o\4\w\a\k\c\a\a\b\0\w\2\c\4\v\l\9\f\5\a\s\0\8\i\4\d\n ]] 00:08:48.973 07:18:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.973 07:18:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:48.973 [2024-11-28 07:18:11.104265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:48.973 [2024-11-28 07:18:11.104393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70571 ] 00:08:48.973 [2024-11-28 07:18:11.236839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.231 [2024-11-28 07:18:11.323544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.231  [2024-11-28T07:18:11.765Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.490 00:08:49.490 07:18:11 -- dd/posix.sh@93 -- # [[ xs84t3t0oisdcokzcc2kvsj36v88h257uq9zn5ica7qhqsiwmad9mykk7yudgeldf6wjttzfv2xpzxcmu9rwgxstj2pg4nzxeh2yl23b570buprtm7m9guk16b7qwf6dv14ouyzdavf49r7numj80blkf9rzjoiilelalvrtqjsy61q74iq6wyjkw3utuohwzfcb7goct20xzt7s553vbavq7ix026o4igjh29ang14y4ijx73drgtvm782rx3l2gmn8pltpgnqkji35rrtolqkckoi3xcdext6a5lqlliqi6gsi4m8ho5ckaonduy1aqv3249zxgv4iw8zpodigkre1jjcbpcr4k8bgzh8xc80p428j03u5yzhve6l5tpeozi08p1cr3lbdfws09teb1wivt6xqzovopmmks0igzdqxqvku0bnnczthvus0myhr9lgtc5yzhrfmu7ev8tkg9i2krsmc9bts3n4pjo4wakcaab0w2c4vl9f5as08i4dn == \x\s\8\4\t\3\t\0\o\i\s\d\c\o\k\z\c\c\2\k\v\s\j\3\6\v\8\8\h\2\5\7\u\q\9\z\n\5\i\c\a\7\q\h\q\s\i\w\m\a\d\9\m\y\k\k\7\y\u\d\g\e\l\d\f\6\w\j\t\t\z\f\v\2\x\p\z\x\c\m\u\9\r\w\g\x\s\t\j\2\p\g\4\n\z\x\e\h\2\y\l\2\3\b\5\7\0\b\u\p\r\t\m\7\m\9\g\u\k\1\6\b\7\q\w\f\6\d\v\1\4\o\u\y\z\d\a\v\f\4\9\r\7\n\u\m\j\8\0\b\l\k\f\9\r\z\j\o\i\i\l\e\l\a\l\v\r\t\q\j\s\y\6\1\q\7\4\i\q\6\w\y\j\k\w\3\u\t\u\o\h\w\z\f\c\b\7\g\o\c\t\2\0\x\z\t\7\s\5\5\3\v\b\a\v\q\7\i\x\0\2\6\o\4\i\g\j\h\2\9\a\n\g\1\4\y\4\i\j\x\7\3\d\r\g\t\v\m\7\8\2\r\x\3\l\2\g\m\n\8\p\l\t\p\g\n\q\k\j\i\3\5\r\r\t\o\l\q\k\c\k\o\i\3\x\c\d\e\x\t\6\a\5\l\q\l\l\i\q\i\6\g\s\i\4\m\8\h\o\5\c\k\a\o\n\d\u\y\1\a\q\v\3\2\4\9\z\x\g\v\4\i\w\8\z\p\o\d\i\g\k\r\e\1\j\j\c\b\p\c\r\4\k\8\b\g\z\h\8\x\c\8\0\p\4\2\8\j\0\3\u\5\y\z\h\v\e\6\l\5\t\p\e\o\z\i\0\8\p\1\c\r\3\l\b\d\f\w\s\0\9\t\e\b\1\w\i\v\t\6\x\q\z\o\v\o\p\m\m\k\s\0\i\g\z\d\q\x\q\v\k\u\0\b\n\n\c\z\t\h\v\u\s\0\m\y\h\r\9\l\g\t\c\5\y\z\h\r\f\m\u\7\e\v\8\t\k\g\9\i\2\k\r\s\m\c\9\b\t\s\3\n\4\p\j\o\4\w\a\k\c\a\a\b\0\w\2\c\4\v\l\9\f\5\a\s\0\8\i\4\d\n ]] 00:08:49.490 07:18:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.490 07:18:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:49.490 [2024-11-28 07:18:11.669157] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:49.490 [2024-11-28 07:18:11.669270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70584 ] 00:08:49.748 [2024-11-28 07:18:11.805095] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.748 [2024-11-28 07:18:11.888969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.748  [2024-11-28T07:18:12.283Z] Copying: 512/512 [B] (average 125 kBps) 00:08:50.008 00:08:50.008 07:18:12 -- dd/posix.sh@93 -- # [[ xs84t3t0oisdcokzcc2kvsj36v88h257uq9zn5ica7qhqsiwmad9mykk7yudgeldf6wjttzfv2xpzxcmu9rwgxstj2pg4nzxeh2yl23b570buprtm7m9guk16b7qwf6dv14ouyzdavf49r7numj80blkf9rzjoiilelalvrtqjsy61q74iq6wyjkw3utuohwzfcb7goct20xzt7s553vbavq7ix026o4igjh29ang14y4ijx73drgtvm782rx3l2gmn8pltpgnqkji35rrtolqkckoi3xcdext6a5lqlliqi6gsi4m8ho5ckaonduy1aqv3249zxgv4iw8zpodigkre1jjcbpcr4k8bgzh8xc80p428j03u5yzhve6l5tpeozi08p1cr3lbdfws09teb1wivt6xqzovopmmks0igzdqxqvku0bnnczthvus0myhr9lgtc5yzhrfmu7ev8tkg9i2krsmc9bts3n4pjo4wakcaab0w2c4vl9f5as08i4dn == \x\s\8\4\t\3\t\0\o\i\s\d\c\o\k\z\c\c\2\k\v\s\j\3\6\v\8\8\h\2\5\7\u\q\9\z\n\5\i\c\a\7\q\h\q\s\i\w\m\a\d\9\m\y\k\k\7\y\u\d\g\e\l\d\f\6\w\j\t\t\z\f\v\2\x\p\z\x\c\m\u\9\r\w\g\x\s\t\j\2\p\g\4\n\z\x\e\h\2\y\l\2\3\b\5\7\0\b\u\p\r\t\m\7\m\9\g\u\k\1\6\b\7\q\w\f\6\d\v\1\4\o\u\y\z\d\a\v\f\4\9\r\7\n\u\m\j\8\0\b\l\k\f\9\r\z\j\o\i\i\l\e\l\a\l\v\r\t\q\j\s\y\6\1\q\7\4\i\q\6\w\y\j\k\w\3\u\t\u\o\h\w\z\f\c\b\7\g\o\c\t\2\0\x\z\t\7\s\5\5\3\v\b\a\v\q\7\i\x\0\2\6\o\4\i\g\j\h\2\9\a\n\g\1\4\y\4\i\j\x\7\3\d\r\g\t\v\m\7\8\2\r\x\3\l\2\g\m\n\8\p\l\t\p\g\n\q\k\j\i\3\5\r\r\t\o\l\q\k\c\k\o\i\3\x\c\d\e\x\t\6\a\5\l\q\l\l\i\q\i\6\g\s\i\4\m\8\h\o\5\c\k\a\o\n\d\u\y\1\a\q\v\3\2\4\9\z\x\g\v\4\i\w\8\z\p\o\d\i\g\k\r\e\1\j\j\c\b\p\c\r\4\k\8\b\g\z\h\8\x\c\8\0\p\4\2\8\j\0\3\u\5\y\z\h\v\e\6\l\5\t\p\e\o\z\i\0\8\p\1\c\r\3\l\b\d\f\w\s\0\9\t\e\b\1\w\i\v\t\6\x\q\z\o\v\o\p\m\m\k\s\0\i\g\z\d\q\x\q\v\k\u\0\b\n\n\c\z\t\h\v\u\s\0\m\y\h\r\9\l\g\t\c\5\y\z\h\r\f\m\u\7\e\v\8\t\k\g\9\i\2\k\r\s\m\c\9\b\t\s\3\n\4\p\j\o\4\w\a\k\c\a\a\b\0\w\2\c\4\v\l\9\f\5\a\s\0\8\i\4\d\n ]] 00:08:50.008 07:18:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:50.008 07:18:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:50.008 [2024-11-28 07:18:12.234398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:50.008 [2024-11-28 07:18:12.234522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70586 ] 00:08:50.268 [2024-11-28 07:18:12.366224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.268 [2024-11-28 07:18:12.456525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.268  [2024-11-28T07:18:12.803Z] Copying: 512/512 [B] (average 500 kBps) 00:08:50.528 00:08:50.528 07:18:12 -- dd/posix.sh@93 -- # [[ xs84t3t0oisdcokzcc2kvsj36v88h257uq9zn5ica7qhqsiwmad9mykk7yudgeldf6wjttzfv2xpzxcmu9rwgxstj2pg4nzxeh2yl23b570buprtm7m9guk16b7qwf6dv14ouyzdavf49r7numj80blkf9rzjoiilelalvrtqjsy61q74iq6wyjkw3utuohwzfcb7goct20xzt7s553vbavq7ix026o4igjh29ang14y4ijx73drgtvm782rx3l2gmn8pltpgnqkji35rrtolqkckoi3xcdext6a5lqlliqi6gsi4m8ho5ckaonduy1aqv3249zxgv4iw8zpodigkre1jjcbpcr4k8bgzh8xc80p428j03u5yzhve6l5tpeozi08p1cr3lbdfws09teb1wivt6xqzovopmmks0igzdqxqvku0bnnczthvus0myhr9lgtc5yzhrfmu7ev8tkg9i2krsmc9bts3n4pjo4wakcaab0w2c4vl9f5as08i4dn == \x\s\8\4\t\3\t\0\o\i\s\d\c\o\k\z\c\c\2\k\v\s\j\3\6\v\8\8\h\2\5\7\u\q\9\z\n\5\i\c\a\7\q\h\q\s\i\w\m\a\d\9\m\y\k\k\7\y\u\d\g\e\l\d\f\6\w\j\t\t\z\f\v\2\x\p\z\x\c\m\u\9\r\w\g\x\s\t\j\2\p\g\4\n\z\x\e\h\2\y\l\2\3\b\5\7\0\b\u\p\r\t\m\7\m\9\g\u\k\1\6\b\7\q\w\f\6\d\v\1\4\o\u\y\z\d\a\v\f\4\9\r\7\n\u\m\j\8\0\b\l\k\f\9\r\z\j\o\i\i\l\e\l\a\l\v\r\t\q\j\s\y\6\1\q\7\4\i\q\6\w\y\j\k\w\3\u\t\u\o\h\w\z\f\c\b\7\g\o\c\t\2\0\x\z\t\7\s\5\5\3\v\b\a\v\q\7\i\x\0\2\6\o\4\i\g\j\h\2\9\a\n\g\1\4\y\4\i\j\x\7\3\d\r\g\t\v\m\7\8\2\r\x\3\l\2\g\m\n\8\p\l\t\p\g\n\q\k\j\i\3\5\r\r\t\o\l\q\k\c\k\o\i\3\x\c\d\e\x\t\6\a\5\l\q\l\l\i\q\i\6\g\s\i\4\m\8\h\o\5\c\k\a\o\n\d\u\y\1\a\q\v\3\2\4\9\z\x\g\v\4\i\w\8\z\p\o\d\i\g\k\r\e\1\j\j\c\b\p\c\r\4\k\8\b\g\z\h\8\x\c\8\0\p\4\2\8\j\0\3\u\5\y\z\h\v\e\6\l\5\t\p\e\o\z\i\0\8\p\1\c\r\3\l\b\d\f\w\s\0\9\t\e\b\1\w\i\v\t\6\x\q\z\o\v\o\p\m\m\k\s\0\i\g\z\d\q\x\q\v\k\u\0\b\n\n\c\z\t\h\v\u\s\0\m\y\h\r\9\l\g\t\c\5\y\z\h\r\f\m\u\7\e\v\8\t\k\g\9\i\2\k\r\s\m\c\9\b\t\s\3\n\4\p\j\o\4\w\a\k\c\a\a\b\0\w\2\c\4\v\l\9\f\5\a\s\0\8\i\4\d\n ]] 00:08:50.528 07:18:12 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:50.528 07:18:12 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:50.528 07:18:12 -- dd/common.sh@98 -- # xtrace_disable 00:08:50.528 07:18:12 -- common/autotest_common.sh@10 -- # set +x 00:08:50.528 07:18:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:50.528 07:18:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:50.787 [2024-11-28 07:18:12.815133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:50.787 [2024-11-28 07:18:12.815265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70599 ] 00:08:50.787 [2024-11-28 07:18:12.954448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.787 [2024-11-28 07:18:13.027901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.047  [2024-11-28T07:18:13.322Z] Copying: 512/512 [B] (average 500 kBps) 00:08:51.047 00:08:51.047 07:18:13 -- dd/posix.sh@93 -- # [[ t4jo7v6im641nba6zfm6zd7humhvb9sl2uwathxc5w13t1zholoxuki7s6pqs28jvysjpjw7pwg2su4mfk8tdmu1iljdvdgvtga4xmxn2hmlwl9opsgp8xnbtjgooirhdnhu1n1oxr3vbyp7z777gr4dj9gv1x9yz2vko6za6p8l97hfdn6yu3vywgk686mo8y3lekguen9td20c5pgyw25n1s5fpfrjjeqp9x58qknunfiu02kz3hq2lty3k15s0zpmq4jryd2fvbajaq9szfl31o401xw964x4o9jix6npjs1d70ocmbntvkeqmqsam2sguqu5b039kgp0gcagkzhme81oatutx3zrxz0n2t8siv0vrwkvce1hp5w5nyc6hciex5hkefs8p94qs4oi4fuj2vd9bwuwws7ss3c1u4fahlp68lb6yhlrqevl9pt93u4gv9l3hmqus97685kzfuyfco6uc1zeq14jsesmh139308lx6j0lelzzn1yjysb == \t\4\j\o\7\v\6\i\m\6\4\1\n\b\a\6\z\f\m\6\z\d\7\h\u\m\h\v\b\9\s\l\2\u\w\a\t\h\x\c\5\w\1\3\t\1\z\h\o\l\o\x\u\k\i\7\s\6\p\q\s\2\8\j\v\y\s\j\p\j\w\7\p\w\g\2\s\u\4\m\f\k\8\t\d\m\u\1\i\l\j\d\v\d\g\v\t\g\a\4\x\m\x\n\2\h\m\l\w\l\9\o\p\s\g\p\8\x\n\b\t\j\g\o\o\i\r\h\d\n\h\u\1\n\1\o\x\r\3\v\b\y\p\7\z\7\7\7\g\r\4\d\j\9\g\v\1\x\9\y\z\2\v\k\o\6\z\a\6\p\8\l\9\7\h\f\d\n\6\y\u\3\v\y\w\g\k\6\8\6\m\o\8\y\3\l\e\k\g\u\e\n\9\t\d\2\0\c\5\p\g\y\w\2\5\n\1\s\5\f\p\f\r\j\j\e\q\p\9\x\5\8\q\k\n\u\n\f\i\u\0\2\k\z\3\h\q\2\l\t\y\3\k\1\5\s\0\z\p\m\q\4\j\r\y\d\2\f\v\b\a\j\a\q\9\s\z\f\l\3\1\o\4\0\1\x\w\9\6\4\x\4\o\9\j\i\x\6\n\p\j\s\1\d\7\0\o\c\m\b\n\t\v\k\e\q\m\q\s\a\m\2\s\g\u\q\u\5\b\0\3\9\k\g\p\0\g\c\a\g\k\z\h\m\e\8\1\o\a\t\u\t\x\3\z\r\x\z\0\n\2\t\8\s\i\v\0\v\r\w\k\v\c\e\1\h\p\5\w\5\n\y\c\6\h\c\i\e\x\5\h\k\e\f\s\8\p\9\4\q\s\4\o\i\4\f\u\j\2\v\d\9\b\w\u\w\w\s\7\s\s\3\c\1\u\4\f\a\h\l\p\6\8\l\b\6\y\h\l\r\q\e\v\l\9\p\t\9\3\u\4\g\v\9\l\3\h\m\q\u\s\9\7\6\8\5\k\z\f\u\y\f\c\o\6\u\c\1\z\e\q\1\4\j\s\e\s\m\h\1\3\9\3\0\8\l\x\6\j\0\l\e\l\z\z\n\1\y\j\y\s\b ]] 00:08:51.047 07:18:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:51.047 07:18:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:51.307 [2024-11-28 07:18:13.355895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:51.307 [2024-11-28 07:18:13.355993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70601 ] 00:08:51.307 [2024-11-28 07:18:13.487706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.307 [2024-11-28 07:18:13.572894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.570  [2024-11-28T07:18:14.104Z] Copying: 512/512 [B] (average 500 kBps) 00:08:51.829 00:08:51.830 07:18:13 -- dd/posix.sh@93 -- # [[ t4jo7v6im641nba6zfm6zd7humhvb9sl2uwathxc5w13t1zholoxuki7s6pqs28jvysjpjw7pwg2su4mfk8tdmu1iljdvdgvtga4xmxn2hmlwl9opsgp8xnbtjgooirhdnhu1n1oxr3vbyp7z777gr4dj9gv1x9yz2vko6za6p8l97hfdn6yu3vywgk686mo8y3lekguen9td20c5pgyw25n1s5fpfrjjeqp9x58qknunfiu02kz3hq2lty3k15s0zpmq4jryd2fvbajaq9szfl31o401xw964x4o9jix6npjs1d70ocmbntvkeqmqsam2sguqu5b039kgp0gcagkzhme81oatutx3zrxz0n2t8siv0vrwkvce1hp5w5nyc6hciex5hkefs8p94qs4oi4fuj2vd9bwuwws7ss3c1u4fahlp68lb6yhlrqevl9pt93u4gv9l3hmqus97685kzfuyfco6uc1zeq14jsesmh139308lx6j0lelzzn1yjysb == \t\4\j\o\7\v\6\i\m\6\4\1\n\b\a\6\z\f\m\6\z\d\7\h\u\m\h\v\b\9\s\l\2\u\w\a\t\h\x\c\5\w\1\3\t\1\z\h\o\l\o\x\u\k\i\7\s\6\p\q\s\2\8\j\v\y\s\j\p\j\w\7\p\w\g\2\s\u\4\m\f\k\8\t\d\m\u\1\i\l\j\d\v\d\g\v\t\g\a\4\x\m\x\n\2\h\m\l\w\l\9\o\p\s\g\p\8\x\n\b\t\j\g\o\o\i\r\h\d\n\h\u\1\n\1\o\x\r\3\v\b\y\p\7\z\7\7\7\g\r\4\d\j\9\g\v\1\x\9\y\z\2\v\k\o\6\z\a\6\p\8\l\9\7\h\f\d\n\6\y\u\3\v\y\w\g\k\6\8\6\m\o\8\y\3\l\e\k\g\u\e\n\9\t\d\2\0\c\5\p\g\y\w\2\5\n\1\s\5\f\p\f\r\j\j\e\q\p\9\x\5\8\q\k\n\u\n\f\i\u\0\2\k\z\3\h\q\2\l\t\y\3\k\1\5\s\0\z\p\m\q\4\j\r\y\d\2\f\v\b\a\j\a\q\9\s\z\f\l\3\1\o\4\0\1\x\w\9\6\4\x\4\o\9\j\i\x\6\n\p\j\s\1\d\7\0\o\c\m\b\n\t\v\k\e\q\m\q\s\a\m\2\s\g\u\q\u\5\b\0\3\9\k\g\p\0\g\c\a\g\k\z\h\m\e\8\1\o\a\t\u\t\x\3\z\r\x\z\0\n\2\t\8\s\i\v\0\v\r\w\k\v\c\e\1\h\p\5\w\5\n\y\c\6\h\c\i\e\x\5\h\k\e\f\s\8\p\9\4\q\s\4\o\i\4\f\u\j\2\v\d\9\b\w\u\w\w\s\7\s\s\3\c\1\u\4\f\a\h\l\p\6\8\l\b\6\y\h\l\r\q\e\v\l\9\p\t\9\3\u\4\g\v\9\l\3\h\m\q\u\s\9\7\6\8\5\k\z\f\u\y\f\c\o\6\u\c\1\z\e\q\1\4\j\s\e\s\m\h\1\3\9\3\0\8\l\x\6\j\0\l\e\l\z\z\n\1\y\j\y\s\b ]] 00:08:51.830 07:18:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:51.830 07:18:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:51.830 [2024-11-28 07:18:13.906935] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:51.830 [2024-11-28 07:18:13.907062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70614 ] 00:08:51.830 [2024-11-28 07:18:14.038503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.089 [2024-11-28 07:18:14.116578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.089  [2024-11-28T07:18:14.675Z] Copying: 512/512 [B] (average 166 kBps) 00:08:52.400 00:08:52.400 07:18:14 -- dd/posix.sh@93 -- # [[ t4jo7v6im641nba6zfm6zd7humhvb9sl2uwathxc5w13t1zholoxuki7s6pqs28jvysjpjw7pwg2su4mfk8tdmu1iljdvdgvtga4xmxn2hmlwl9opsgp8xnbtjgooirhdnhu1n1oxr3vbyp7z777gr4dj9gv1x9yz2vko6za6p8l97hfdn6yu3vywgk686mo8y3lekguen9td20c5pgyw25n1s5fpfrjjeqp9x58qknunfiu02kz3hq2lty3k15s0zpmq4jryd2fvbajaq9szfl31o401xw964x4o9jix6npjs1d70ocmbntvkeqmqsam2sguqu5b039kgp0gcagkzhme81oatutx3zrxz0n2t8siv0vrwkvce1hp5w5nyc6hciex5hkefs8p94qs4oi4fuj2vd9bwuwws7ss3c1u4fahlp68lb6yhlrqevl9pt93u4gv9l3hmqus97685kzfuyfco6uc1zeq14jsesmh139308lx6j0lelzzn1yjysb == \t\4\j\o\7\v\6\i\m\6\4\1\n\b\a\6\z\f\m\6\z\d\7\h\u\m\h\v\b\9\s\l\2\u\w\a\t\h\x\c\5\w\1\3\t\1\z\h\o\l\o\x\u\k\i\7\s\6\p\q\s\2\8\j\v\y\s\j\p\j\w\7\p\w\g\2\s\u\4\m\f\k\8\t\d\m\u\1\i\l\j\d\v\d\g\v\t\g\a\4\x\m\x\n\2\h\m\l\w\l\9\o\p\s\g\p\8\x\n\b\t\j\g\o\o\i\r\h\d\n\h\u\1\n\1\o\x\r\3\v\b\y\p\7\z\7\7\7\g\r\4\d\j\9\g\v\1\x\9\y\z\2\v\k\o\6\z\a\6\p\8\l\9\7\h\f\d\n\6\y\u\3\v\y\w\g\k\6\8\6\m\o\8\y\3\l\e\k\g\u\e\n\9\t\d\2\0\c\5\p\g\y\w\2\5\n\1\s\5\f\p\f\r\j\j\e\q\p\9\x\5\8\q\k\n\u\n\f\i\u\0\2\k\z\3\h\q\2\l\t\y\3\k\1\5\s\0\z\p\m\q\4\j\r\y\d\2\f\v\b\a\j\a\q\9\s\z\f\l\3\1\o\4\0\1\x\w\9\6\4\x\4\o\9\j\i\x\6\n\p\j\s\1\d\7\0\o\c\m\b\n\t\v\k\e\q\m\q\s\a\m\2\s\g\u\q\u\5\b\0\3\9\k\g\p\0\g\c\a\g\k\z\h\m\e\8\1\o\a\t\u\t\x\3\z\r\x\z\0\n\2\t\8\s\i\v\0\v\r\w\k\v\c\e\1\h\p\5\w\5\n\y\c\6\h\c\i\e\x\5\h\k\e\f\s\8\p\9\4\q\s\4\o\i\4\f\u\j\2\v\d\9\b\w\u\w\w\s\7\s\s\3\c\1\u\4\f\a\h\l\p\6\8\l\b\6\y\h\l\r\q\e\v\l\9\p\t\9\3\u\4\g\v\9\l\3\h\m\q\u\s\9\7\6\8\5\k\z\f\u\y\f\c\o\6\u\c\1\z\e\q\1\4\j\s\e\s\m\h\1\3\9\3\0\8\l\x\6\j\0\l\e\l\z\z\n\1\y\j\y\s\b ]] 00:08:52.400 07:18:14 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:52.400 07:18:14 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:52.400 [2024-11-28 07:18:14.465616] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:52.400 [2024-11-28 07:18:14.465745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70616 ] 00:08:52.400 [2024-11-28 07:18:14.600793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.659 [2024-11-28 07:18:14.682360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.659  [2024-11-28T07:18:15.193Z] Copying: 512/512 [B] (average 500 kBps) 00:08:52.918 00:08:52.918 07:18:14 -- dd/posix.sh@93 -- # [[ t4jo7v6im641nba6zfm6zd7humhvb9sl2uwathxc5w13t1zholoxuki7s6pqs28jvysjpjw7pwg2su4mfk8tdmu1iljdvdgvtga4xmxn2hmlwl9opsgp8xnbtjgooirhdnhu1n1oxr3vbyp7z777gr4dj9gv1x9yz2vko6za6p8l97hfdn6yu3vywgk686mo8y3lekguen9td20c5pgyw25n1s5fpfrjjeqp9x58qknunfiu02kz3hq2lty3k15s0zpmq4jryd2fvbajaq9szfl31o401xw964x4o9jix6npjs1d70ocmbntvkeqmqsam2sguqu5b039kgp0gcagkzhme81oatutx3zrxz0n2t8siv0vrwkvce1hp5w5nyc6hciex5hkefs8p94qs4oi4fuj2vd9bwuwws7ss3c1u4fahlp68lb6yhlrqevl9pt93u4gv9l3hmqus97685kzfuyfco6uc1zeq14jsesmh139308lx6j0lelzzn1yjysb == \t\4\j\o\7\v\6\i\m\6\4\1\n\b\a\6\z\f\m\6\z\d\7\h\u\m\h\v\b\9\s\l\2\u\w\a\t\h\x\c\5\w\1\3\t\1\z\h\o\l\o\x\u\k\i\7\s\6\p\q\s\2\8\j\v\y\s\j\p\j\w\7\p\w\g\2\s\u\4\m\f\k\8\t\d\m\u\1\i\l\j\d\v\d\g\v\t\g\a\4\x\m\x\n\2\h\m\l\w\l\9\o\p\s\g\p\8\x\n\b\t\j\g\o\o\i\r\h\d\n\h\u\1\n\1\o\x\r\3\v\b\y\p\7\z\7\7\7\g\r\4\d\j\9\g\v\1\x\9\y\z\2\v\k\o\6\z\a\6\p\8\l\9\7\h\f\d\n\6\y\u\3\v\y\w\g\k\6\8\6\m\o\8\y\3\l\e\k\g\u\e\n\9\t\d\2\0\c\5\p\g\y\w\2\5\n\1\s\5\f\p\f\r\j\j\e\q\p\9\x\5\8\q\k\n\u\n\f\i\u\0\2\k\z\3\h\q\2\l\t\y\3\k\1\5\s\0\z\p\m\q\4\j\r\y\d\2\f\v\b\a\j\a\q\9\s\z\f\l\3\1\o\4\0\1\x\w\9\6\4\x\4\o\9\j\i\x\6\n\p\j\s\1\d\7\0\o\c\m\b\n\t\v\k\e\q\m\q\s\a\m\2\s\g\u\q\u\5\b\0\3\9\k\g\p\0\g\c\a\g\k\z\h\m\e\8\1\o\a\t\u\t\x\3\z\r\x\z\0\n\2\t\8\s\i\v\0\v\r\w\k\v\c\e\1\h\p\5\w\5\n\y\c\6\h\c\i\e\x\5\h\k\e\f\s\8\p\9\4\q\s\4\o\i\4\f\u\j\2\v\d\9\b\w\u\w\w\s\7\s\s\3\c\1\u\4\f\a\h\l\p\6\8\l\b\6\y\h\l\r\q\e\v\l\9\p\t\9\3\u\4\g\v\9\l\3\h\m\q\u\s\9\7\6\8\5\k\z\f\u\y\f\c\o\6\u\c\1\z\e\q\1\4\j\s\e\s\m\h\1\3\9\3\0\8\l\x\6\j\0\l\e\l\z\z\n\1\y\j\y\s\b ]] 00:08:52.918 00:08:52.918 real 0m4.489s 00:08:52.918 user 0m2.408s 00:08:52.918 sys 0m1.097s 00:08:52.918 07:18:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:52.918 07:18:14 -- common/autotest_common.sh@10 -- # set +x 00:08:52.918 ************************************ 00:08:52.918 END TEST dd_flags_misc 00:08:52.918 ************************************ 00:08:52.918 07:18:15 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:52.918 07:18:15 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:52.918 * Second test run, disabling liburing, forcing AIO 00:08:52.918 07:18:15 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:52.918 07:18:15 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:52.918 07:18:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:52.918 07:18:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:52.918 07:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:52.918 ************************************ 00:08:52.918 START TEST dd_flag_append_forced_aio 00:08:52.918 ************************************ 00:08:52.918 07:18:15 -- common/autotest_common.sh@1114 -- # append 00:08:52.918 07:18:15 -- dd/posix.sh@16 -- # local dump0 00:08:52.918 07:18:15 -- dd/posix.sh@17 -- # local dump1 00:08:52.918 07:18:15 -- dd/posix.sh@19 -- # gen_bytes 32 
00:08:52.918 07:18:15 -- dd/common.sh@98 -- # xtrace_disable 00:08:52.918 07:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:52.918 07:18:15 -- dd/posix.sh@19 -- # dump0=a8b9ueli04t3bb1wvi0u8k2tg59vrtrr 00:08:52.918 07:18:15 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:52.918 07:18:15 -- dd/common.sh@98 -- # xtrace_disable 00:08:52.918 07:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:52.918 07:18:15 -- dd/posix.sh@20 -- # dump1=xjkjyn8caxi08oyn5iicii1lgbf8a6kp 00:08:52.918 07:18:15 -- dd/posix.sh@22 -- # printf %s a8b9ueli04t3bb1wvi0u8k2tg59vrtrr 00:08:52.918 07:18:15 -- dd/posix.sh@23 -- # printf %s xjkjyn8caxi08oyn5iicii1lgbf8a6kp 00:08:52.918 07:18:15 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:52.918 [2024-11-28 07:18:15.085976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:52.918 [2024-11-28 07:18:15.086085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70648 ] 00:08:53.177 [2024-11-28 07:18:15.218669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.177 [2024-11-28 07:18:15.300719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.177  [2024-11-28T07:18:15.711Z] Copying: 32/32 [B] (average 31 kBps) 00:08:53.436 00:08:53.436 07:18:15 -- dd/posix.sh@27 -- # [[ xjkjyn8caxi08oyn5iicii1lgbf8a6kpa8b9ueli04t3bb1wvi0u8k2tg59vrtrr == \x\j\k\j\y\n\8\c\a\x\i\0\8\o\y\n\5\i\i\c\i\i\1\l\g\b\f\8\a\6\k\p\a\8\b\9\u\e\l\i\0\4\t\3\b\b\1\w\v\i\0\u\8\k\2\t\g\5\9\v\r\t\r\r ]] 00:08:53.436 00:08:53.436 real 0m0.559s 00:08:53.436 user 0m0.302s 00:08:53.436 sys 0m0.137s 00:08:53.436 07:18:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:53.436 ************************************ 00:08:53.436 END TEST dd_flag_append_forced_aio 00:08:53.436 ************************************ 00:08:53.436 07:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:53.436 07:18:15 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:53.436 07:18:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:53.436 07:18:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.436 07:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:53.436 ************************************ 00:08:53.436 START TEST dd_flag_directory_forced_aio 00:08:53.436 ************************************ 00:08:53.436 07:18:15 -- common/autotest_common.sh@1114 -- # directory 00:08:53.436 07:18:15 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.436 07:18:15 -- common/autotest_common.sh@650 -- # local es=0 00:08:53.436 07:18:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.436 07:18:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.436 07:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.436 07:18:15 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.436 07:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.436 07:18:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.436 07:18:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.436 07:18:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.436 07:18:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.436 07:18:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.436 [2024-11-28 07:18:15.680633] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.436 [2024-11-28 07:18:15.680715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70674 ] 00:08:53.696 [2024-11-28 07:18:15.817121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.696 [2024-11-28 07:18:15.896081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.957 [2024-11-28 07:18:15.981892] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:53.957 [2024-11-28 07:18:15.981966] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:53.957 [2024-11-28 07:18:15.981995] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:53.957 [2024-11-28 07:18:16.093300] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:53.957 07:18:16 -- common/autotest_common.sh@653 -- # es=236 00:08:53.957 07:18:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.957 07:18:16 -- common/autotest_common.sh@662 -- # es=108 00:08:53.957 07:18:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:53.957 07:18:16 -- common/autotest_common.sh@670 -- # es=1 00:08:53.957 07:18:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.957 07:18:16 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:53.957 07:18:16 -- common/autotest_common.sh@650 -- # local es=0 00:08:53.957 07:18:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:53.957 07:18:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.957 07:18:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.957 07:18:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.957 07:18:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.957 07:18:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.957 07:18:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:53.957 07:18:16 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:53.957 07:18:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:53.957 07:18:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:53.957 [2024-11-28 07:18:16.225491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.957 [2024-11-28 07:18:16.225587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70684 ] 00:08:54.217 [2024-11-28 07:18:16.358932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.217 [2024-11-28 07:18:16.440741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.493 [2024-11-28 07:18:16.525267] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:54.493 [2024-11-28 07:18:16.525347] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:54.493 [2024-11-28 07:18:16.525364] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.493 [2024-11-28 07:18:16.634148] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:54.493 07:18:16 -- common/autotest_common.sh@653 -- # es=236 00:08:54.493 07:18:16 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:54.493 07:18:16 -- common/autotest_common.sh@662 -- # es=108 00:08:54.493 07:18:16 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:54.493 07:18:16 -- common/autotest_common.sh@670 -- # es=1 00:08:54.493 07:18:16 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:54.493 00:08:54.493 real 0m1.084s 00:08:54.493 user 0m0.590s 00:08:54.493 sys 0m0.282s 00:08:54.493 07:18:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:54.493 07:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:54.493 ************************************ 00:08:54.493 END TEST dd_flag_directory_forced_aio 00:08:54.493 ************************************ 00:08:54.493 07:18:16 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:54.493 07:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:54.493 07:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:54.493 07:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:54.493 ************************************ 00:08:54.493 START TEST dd_flag_nofollow_forced_aio 00:08:54.493 ************************************ 00:08:54.493 07:18:16 -- common/autotest_common.sh@1114 -- # nofollow 00:08:54.493 07:18:16 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:54.493 07:18:16 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:54.493 07:18:16 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:54.753 07:18:16 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:54.753 07:18:16 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:54.753 07:18:16 -- common/autotest_common.sh@650 -- # local es=0 00:08:54.753 07:18:16 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:54.753 07:18:16 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.753 07:18:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.753 07:18:16 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.753 07:18:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.753 07:18:16 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.753 07:18:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.753 07:18:16 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:54.753 07:18:16 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:54.753 07:18:16 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:54.753 [2024-11-28 07:18:16.819188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:54.753 [2024-11-28 07:18:16.819287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70718 ] 00:08:54.753 [2024-11-28 07:18:16.951068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.012 [2024-11-28 07:18:17.028497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.012 [2024-11-28 07:18:17.111283] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:55.012 [2024-11-28 07:18:17.111371] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:55.012 [2024-11-28 07:18:17.111388] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.012 [2024-11-28 07:18:17.217259] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:55.271 07:18:17 -- common/autotest_common.sh@653 -- # es=216 00:08:55.271 07:18:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.271 07:18:17 -- common/autotest_common.sh@662 -- # es=88 00:08:55.271 07:18:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:55.271 07:18:17 -- common/autotest_common.sh@670 -- # es=1 00:08:55.271 07:18:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.271 07:18:17 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:55.271 07:18:17 -- common/autotest_common.sh@650 -- # local es=0 00:08:55.271 07:18:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:55.271 07:18:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.271 07:18:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.271 07:18:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.271 07:18:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.271 07:18:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.271 07:18:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.271 07:18:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:55.271 07:18:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:55.271 07:18:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:55.272 [2024-11-28 07:18:17.343054] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:55.272 [2024-11-28 07:18:17.343154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70722 ] 00:08:55.272 [2024-11-28 07:18:17.475409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.531 [2024-11-28 07:18:17.561668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.531 [2024-11-28 07:18:17.645282] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:55.531 [2024-11-28 07:18:17.645356] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:55.531 [2024-11-28 07:18:17.645387] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.531 [2024-11-28 07:18:17.758930] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:55.790 07:18:17 -- common/autotest_common.sh@653 -- # es=216 00:08:55.790 07:18:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.790 07:18:17 -- common/autotest_common.sh@662 -- # es=88 00:08:55.790 07:18:17 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:55.790 07:18:17 -- common/autotest_common.sh@670 -- # es=1 00:08:55.790 07:18:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.790 07:18:17 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:55.790 07:18:17 -- dd/common.sh@98 -- # xtrace_disable 00:08:55.790 07:18:17 -- common/autotest_common.sh@10 -- # set +x 00:08:55.790 07:18:17 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:55.790 [2024-11-28 07:18:17.888741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
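The two NOT checks above exercise spdk_dd's nofollow handling: with --iflag=nofollow or --oflag=nofollow, opening a symlinked dump file is expected to fail with "Too many levels of symbolic links" (ELOOP). A minimal shell sketch of that sequence, using the paths from the trace and a plain status check in place of the es/NOT bookkeeping done by autotest_common.sh:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
ln -fs "$D/dd.dump0" "$D/dd.dump0.link"      # symlink used as the input path
# Opening the link with nofollow must fail (ELOOP), so success here is the error case.
if "$DD" --aio --if="$D/dd.dump0.link" --iflag=nofollow --of="$D/dd.dump1"; then
    echo "nofollow was not honoured on the input side" >&2
fi
# Without the flag, the same copy through the link is expected to succeed.
"$DD" --aio --if="$D/dd.dump0.link" --of="$D/dd.dump1"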
00:08:55.790 [2024-11-28 07:18:17.888831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70736 ] 00:08:55.790 [2024-11-28 07:18:18.019991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.048 [2024-11-28 07:18:18.110641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.048  [2024-11-28T07:18:18.581Z] Copying: 512/512 [B] (average 500 kBps) 00:08:56.306 00:08:56.306 07:18:18 -- dd/posix.sh@49 -- # [[ gkhvq9bpplqdjt5dueo0fc3fq3zdrdjoyf44tk13xfksd3iizg3fajnebdzdkza1m4trekyk70ff4rmenpwqkl6y6bgrkbsu3gqhdza6584xda73c3mkpxwi8fzp8i5afggk1zjr675wj00arl7pmyo0s28acnzhgz64wftzkwps4h079mjws1fptvc321dcrq22uu714ucg5e87482mxyrr010knki58nre87ydpv5ct9spfg5dsscu88rxi1iz25wcdcauv6jer5usupjzscktzpsbtv4xjziug3i2ophqeng46wjkpl3g9d48zysxvtpvcn8x5rz55skxjnxd1k6v5zdq5m33kukglr2r0ceiyhjmug7ypo0njp4wd2l68mz580a6jf8k1slwlpxv0dtatd574qettx5b8hl15axjtp1tqiihjtr2j08ilj5c26y857lyz39wxrta9pqfq5630e56pmc9qev1yzg94m9bgsretsddbmpky0pulp5m == \g\k\h\v\q\9\b\p\p\l\q\d\j\t\5\d\u\e\o\0\f\c\3\f\q\3\z\d\r\d\j\o\y\f\4\4\t\k\1\3\x\f\k\s\d\3\i\i\z\g\3\f\a\j\n\e\b\d\z\d\k\z\a\1\m\4\t\r\e\k\y\k\7\0\f\f\4\r\m\e\n\p\w\q\k\l\6\y\6\b\g\r\k\b\s\u\3\g\q\h\d\z\a\6\5\8\4\x\d\a\7\3\c\3\m\k\p\x\w\i\8\f\z\p\8\i\5\a\f\g\g\k\1\z\j\r\6\7\5\w\j\0\0\a\r\l\7\p\m\y\o\0\s\2\8\a\c\n\z\h\g\z\6\4\w\f\t\z\k\w\p\s\4\h\0\7\9\m\j\w\s\1\f\p\t\v\c\3\2\1\d\c\r\q\2\2\u\u\7\1\4\u\c\g\5\e\8\7\4\8\2\m\x\y\r\r\0\1\0\k\n\k\i\5\8\n\r\e\8\7\y\d\p\v\5\c\t\9\s\p\f\g\5\d\s\s\c\u\8\8\r\x\i\1\i\z\2\5\w\c\d\c\a\u\v\6\j\e\r\5\u\s\u\p\j\z\s\c\k\t\z\p\s\b\t\v\4\x\j\z\i\u\g\3\i\2\o\p\h\q\e\n\g\4\6\w\j\k\p\l\3\g\9\d\4\8\z\y\s\x\v\t\p\v\c\n\8\x\5\r\z\5\5\s\k\x\j\n\x\d\1\k\6\v\5\z\d\q\5\m\3\3\k\u\k\g\l\r\2\r\0\c\e\i\y\h\j\m\u\g\7\y\p\o\0\n\j\p\4\w\d\2\l\6\8\m\z\5\8\0\a\6\j\f\8\k\1\s\l\w\l\p\x\v\0\d\t\a\t\d\5\7\4\q\e\t\t\x\5\b\8\h\l\1\5\a\x\j\t\p\1\t\q\i\i\h\j\t\r\2\j\0\8\i\l\j\5\c\2\6\y\8\5\7\l\y\z\3\9\w\x\r\t\a\9\p\q\f\q\5\6\3\0\e\5\6\p\m\c\9\q\e\v\1\y\z\g\9\4\m\9\b\g\s\r\e\t\s\d\d\b\m\p\k\y\0\p\u\l\p\5\m ]] 00:08:56.306 00:08:56.306 real 0m1.648s 00:08:56.306 user 0m0.897s 00:08:56.306 sys 0m0.418s 00:08:56.306 07:18:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:56.306 ************************************ 00:08:56.306 END TEST dd_flag_nofollow_forced_aio 00:08:56.306 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:08:56.306 ************************************ 00:08:56.306 07:18:18 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:56.306 07:18:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:56.306 07:18:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.306 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:08:56.306 ************************************ 00:08:56.306 START TEST dd_flag_noatime_forced_aio 00:08:56.306 ************************************ 00:08:56.306 07:18:18 -- common/autotest_common.sh@1114 -- # noatime 00:08:56.306 07:18:18 -- dd/posix.sh@53 -- # local atime_if 00:08:56.306 07:18:18 -- dd/posix.sh@54 -- # local atime_of 00:08:56.306 07:18:18 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:56.306 07:18:18 -- dd/common.sh@98 -- # xtrace_disable 00:08:56.306 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:08:56.306 07:18:18 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:56.306 07:18:18 -- dd/posix.sh@60 -- 
# atime_if=1732778298 00:08:56.306 07:18:18 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:56.306 07:18:18 -- dd/posix.sh@61 -- # atime_of=1732778298 00:08:56.306 07:18:18 -- dd/posix.sh@66 -- # sleep 1 00:08:57.240 07:18:19 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:57.499 [2024-11-28 07:18:19.528160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:57.499 [2024-11-28 07:18:19.528261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70771 ] 00:08:57.499 [2024-11-28 07:18:19.659932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.499 [2024-11-28 07:18:19.750850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.758  [2024-11-28T07:18:20.292Z] Copying: 512/512 [B] (average 500 kBps) 00:08:58.017 00:08:58.017 07:18:20 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.017 07:18:20 -- dd/posix.sh@69 -- # (( atime_if == 1732778298 )) 00:08:58.017 07:18:20 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.017 07:18:20 -- dd/posix.sh@70 -- # (( atime_of == 1732778298 )) 00:08:58.017 07:18:20 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.017 [2024-11-28 07:18:20.094283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:58.017 [2024-11-28 07:18:20.094412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70788 ] 00:08:58.017 [2024-11-28 07:18:20.233443] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.276 [2024-11-28 07:18:20.317353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.276  [2024-11-28T07:18:20.810Z] Copying: 512/512 [B] (average 500 kBps) 00:08:58.535 00:08:58.535 07:18:20 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.535 07:18:20 -- dd/posix.sh@73 -- # (( atime_if < 1732778300 )) 00:08:58.535 00:08:58.535 real 0m2.159s 00:08:58.535 user 0m0.610s 00:08:58.535 sys 0m0.302s 00:08:58.535 07:18:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.535 ************************************ 00:08:58.535 END TEST dd_flag_noatime_forced_aio 00:08:58.535 07:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 ************************************ 00:08:58.535 07:18:20 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:58.535 07:18:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:58.535 07:18:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.535 07:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 ************************************ 00:08:58.535 START TEST dd_flags_misc_forced_aio 00:08:58.535 ************************************ 00:08:58.535 07:18:20 -- common/autotest_common.sh@1114 -- # io 00:08:58.535 07:18:20 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:58.535 07:18:20 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:58.535 07:18:20 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:58.535 07:18:20 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:58.535 07:18:20 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:58.535 07:18:20 -- dd/common.sh@98 -- # xtrace_disable 00:08:58.535 07:18:20 -- common/autotest_common.sh@10 -- # set +x 00:08:58.535 07:18:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:58.535 07:18:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:58.535 [2024-11-28 07:18:20.724726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
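The dd_flag_noatime_forced_aio sequence above follows a stat/copy/stat pattern: record the source file's access time, copy it with --iflag=noatime, and verify the timestamp did not move. A condensed sketch, assuming the same dump files:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
atime_if=$(stat --printf=%X "$D/dd.dump0")   # access time before the copy
sleep 1
"$DD" --aio --if="$D/dd.dump0" --iflag=noatime --of="$D/dd.dump1"
# With noatime the read must not update the source's access time.
(( $(stat --printf=%X "$D/dd.dump0") == atime_if )) || echo "atime changed despite noatime" >&2
# The counterpart check in the trace copies without the flag and expects the atime to advance.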
00:08:58.535 [2024-11-28 07:18:20.724868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70809 ] 00:08:58.794 [2024-11-28 07:18:20.858784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.794 [2024-11-28 07:18:20.934109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.794  [2024-11-28T07:18:21.328Z] Copying: 512/512 [B] (average 500 kBps) 00:08:59.053 00:08:59.053 07:18:21 -- dd/posix.sh@93 -- # [[ 4fqt30jcs0tetoqc21pin4cu8f3861yscfw753wf4ykholrygi0u1574q9xs3jzqkpxvfc18ccisazq0yzsnwcxr91zehq87pxsmuprrcv2v159jstpt1so4hla858vajjbz7wa40q3ncjv5ci0bn7pjir4cpwoyx507oxq40pe3dme1x8ldcbsbos0e7fjbo04w2s6gkv9k554znnh2jg1gb4h17nald7pzsaya4er5t5e4pg9ueuhrd92l4genxl91loldarsprrdgjfdtewi4ay8ohf0emp2tfkag3lg8ck393uvvj379bmhgprl3lbo0ifot4wtbvwiya6blfeehv51wjqx1g6hg0nssof8th63vr87a0dv6yggzubpbc8ir1fkyi8yierbgvy5q54q1x5cege03jb462fd3ldgz4e9dl5v4h8xlpmis6828v6vij62dortlv3wsh22xup5r3nc5d75iycna441t34k1sompx9fbdq3bgx1hirul == \4\f\q\t\3\0\j\c\s\0\t\e\t\o\q\c\2\1\p\i\n\4\c\u\8\f\3\8\6\1\y\s\c\f\w\7\5\3\w\f\4\y\k\h\o\l\r\y\g\i\0\u\1\5\7\4\q\9\x\s\3\j\z\q\k\p\x\v\f\c\1\8\c\c\i\s\a\z\q\0\y\z\s\n\w\c\x\r\9\1\z\e\h\q\8\7\p\x\s\m\u\p\r\r\c\v\2\v\1\5\9\j\s\t\p\t\1\s\o\4\h\l\a\8\5\8\v\a\j\j\b\z\7\w\a\4\0\q\3\n\c\j\v\5\c\i\0\b\n\7\p\j\i\r\4\c\p\w\o\y\x\5\0\7\o\x\q\4\0\p\e\3\d\m\e\1\x\8\l\d\c\b\s\b\o\s\0\e\7\f\j\b\o\0\4\w\2\s\6\g\k\v\9\k\5\5\4\z\n\n\h\2\j\g\1\g\b\4\h\1\7\n\a\l\d\7\p\z\s\a\y\a\4\e\r\5\t\5\e\4\p\g\9\u\e\u\h\r\d\9\2\l\4\g\e\n\x\l\9\1\l\o\l\d\a\r\s\p\r\r\d\g\j\f\d\t\e\w\i\4\a\y\8\o\h\f\0\e\m\p\2\t\f\k\a\g\3\l\g\8\c\k\3\9\3\u\v\v\j\3\7\9\b\m\h\g\p\r\l\3\l\b\o\0\i\f\o\t\4\w\t\b\v\w\i\y\a\6\b\l\f\e\e\h\v\5\1\w\j\q\x\1\g\6\h\g\0\n\s\s\o\f\8\t\h\6\3\v\r\8\7\a\0\d\v\6\y\g\g\z\u\b\p\b\c\8\i\r\1\f\k\y\i\8\y\i\e\r\b\g\v\y\5\q\5\4\q\1\x\5\c\e\g\e\0\3\j\b\4\6\2\f\d\3\l\d\g\z\4\e\9\d\l\5\v\4\h\8\x\l\p\m\i\s\6\8\2\8\v\6\v\i\j\6\2\d\o\r\t\l\v\3\w\s\h\2\2\x\u\p\5\r\3\n\c\5\d\7\5\i\y\c\n\a\4\4\1\t\3\4\k\1\s\o\m\p\x\9\f\b\d\q\3\b\g\x\1\h\i\r\u\l ]] 00:08:59.053 07:18:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.053 07:18:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:59.053 [2024-11-28 07:18:21.260151] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:59.053 [2024-11-28 07:18:21.260264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70822 ] 00:08:59.312 [2024-11-28 07:18:21.391302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.312 [2024-11-28 07:18:21.474988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.312  [2024-11-28T07:18:21.846Z] Copying: 512/512 [B] (average 500 kBps) 00:08:59.571 00:08:59.571 07:18:21 -- dd/posix.sh@93 -- # [[ 4fqt30jcs0tetoqc21pin4cu8f3861yscfw753wf4ykholrygi0u1574q9xs3jzqkpxvfc18ccisazq0yzsnwcxr91zehq87pxsmuprrcv2v159jstpt1so4hla858vajjbz7wa40q3ncjv5ci0bn7pjir4cpwoyx507oxq40pe3dme1x8ldcbsbos0e7fjbo04w2s6gkv9k554znnh2jg1gb4h17nald7pzsaya4er5t5e4pg9ueuhrd92l4genxl91loldarsprrdgjfdtewi4ay8ohf0emp2tfkag3lg8ck393uvvj379bmhgprl3lbo0ifot4wtbvwiya6blfeehv51wjqx1g6hg0nssof8th63vr87a0dv6yggzubpbc8ir1fkyi8yierbgvy5q54q1x5cege03jb462fd3ldgz4e9dl5v4h8xlpmis6828v6vij62dortlv3wsh22xup5r3nc5d75iycna441t34k1sompx9fbdq3bgx1hirul == \4\f\q\t\3\0\j\c\s\0\t\e\t\o\q\c\2\1\p\i\n\4\c\u\8\f\3\8\6\1\y\s\c\f\w\7\5\3\w\f\4\y\k\h\o\l\r\y\g\i\0\u\1\5\7\4\q\9\x\s\3\j\z\q\k\p\x\v\f\c\1\8\c\c\i\s\a\z\q\0\y\z\s\n\w\c\x\r\9\1\z\e\h\q\8\7\p\x\s\m\u\p\r\r\c\v\2\v\1\5\9\j\s\t\p\t\1\s\o\4\h\l\a\8\5\8\v\a\j\j\b\z\7\w\a\4\0\q\3\n\c\j\v\5\c\i\0\b\n\7\p\j\i\r\4\c\p\w\o\y\x\5\0\7\o\x\q\4\0\p\e\3\d\m\e\1\x\8\l\d\c\b\s\b\o\s\0\e\7\f\j\b\o\0\4\w\2\s\6\g\k\v\9\k\5\5\4\z\n\n\h\2\j\g\1\g\b\4\h\1\7\n\a\l\d\7\p\z\s\a\y\a\4\e\r\5\t\5\e\4\p\g\9\u\e\u\h\r\d\9\2\l\4\g\e\n\x\l\9\1\l\o\l\d\a\r\s\p\r\r\d\g\j\f\d\t\e\w\i\4\a\y\8\o\h\f\0\e\m\p\2\t\f\k\a\g\3\l\g\8\c\k\3\9\3\u\v\v\j\3\7\9\b\m\h\g\p\r\l\3\l\b\o\0\i\f\o\t\4\w\t\b\v\w\i\y\a\6\b\l\f\e\e\h\v\5\1\w\j\q\x\1\g\6\h\g\0\n\s\s\o\f\8\t\h\6\3\v\r\8\7\a\0\d\v\6\y\g\g\z\u\b\p\b\c\8\i\r\1\f\k\y\i\8\y\i\e\r\b\g\v\y\5\q\5\4\q\1\x\5\c\e\g\e\0\3\j\b\4\6\2\f\d\3\l\d\g\z\4\e\9\d\l\5\v\4\h\8\x\l\p\m\i\s\6\8\2\8\v\6\v\i\j\6\2\d\o\r\t\l\v\3\w\s\h\2\2\x\u\p\5\r\3\n\c\5\d\7\5\i\y\c\n\a\4\4\1\t\3\4\k\1\s\o\m\p\x\9\f\b\d\q\3\b\g\x\1\h\i\r\u\l ]] 00:08:59.571 07:18:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.571 07:18:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:59.571 [2024-11-28 07:18:21.795612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:59.571 [2024-11-28 07:18:21.795720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70824 ] 00:08:59.830 [2024-11-28 07:18:21.929930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.830 [2024-11-28 07:18:22.010161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.830  [2024-11-28T07:18:22.365Z] Copying: 512/512 [B] (average 125 kBps) 00:09:00.090 00:09:00.090 07:18:22 -- dd/posix.sh@93 -- # [[ 4fqt30jcs0tetoqc21pin4cu8f3861yscfw753wf4ykholrygi0u1574q9xs3jzqkpxvfc18ccisazq0yzsnwcxr91zehq87pxsmuprrcv2v159jstpt1so4hla858vajjbz7wa40q3ncjv5ci0bn7pjir4cpwoyx507oxq40pe3dme1x8ldcbsbos0e7fjbo04w2s6gkv9k554znnh2jg1gb4h17nald7pzsaya4er5t5e4pg9ueuhrd92l4genxl91loldarsprrdgjfdtewi4ay8ohf0emp2tfkag3lg8ck393uvvj379bmhgprl3lbo0ifot4wtbvwiya6blfeehv51wjqx1g6hg0nssof8th63vr87a0dv6yggzubpbc8ir1fkyi8yierbgvy5q54q1x5cege03jb462fd3ldgz4e9dl5v4h8xlpmis6828v6vij62dortlv3wsh22xup5r3nc5d75iycna441t34k1sompx9fbdq3bgx1hirul == \4\f\q\t\3\0\j\c\s\0\t\e\t\o\q\c\2\1\p\i\n\4\c\u\8\f\3\8\6\1\y\s\c\f\w\7\5\3\w\f\4\y\k\h\o\l\r\y\g\i\0\u\1\5\7\4\q\9\x\s\3\j\z\q\k\p\x\v\f\c\1\8\c\c\i\s\a\z\q\0\y\z\s\n\w\c\x\r\9\1\z\e\h\q\8\7\p\x\s\m\u\p\r\r\c\v\2\v\1\5\9\j\s\t\p\t\1\s\o\4\h\l\a\8\5\8\v\a\j\j\b\z\7\w\a\4\0\q\3\n\c\j\v\5\c\i\0\b\n\7\p\j\i\r\4\c\p\w\o\y\x\5\0\7\o\x\q\4\0\p\e\3\d\m\e\1\x\8\l\d\c\b\s\b\o\s\0\e\7\f\j\b\o\0\4\w\2\s\6\g\k\v\9\k\5\5\4\z\n\n\h\2\j\g\1\g\b\4\h\1\7\n\a\l\d\7\p\z\s\a\y\a\4\e\r\5\t\5\e\4\p\g\9\u\e\u\h\r\d\9\2\l\4\g\e\n\x\l\9\1\l\o\l\d\a\r\s\p\r\r\d\g\j\f\d\t\e\w\i\4\a\y\8\o\h\f\0\e\m\p\2\t\f\k\a\g\3\l\g\8\c\k\3\9\3\u\v\v\j\3\7\9\b\m\h\g\p\r\l\3\l\b\o\0\i\f\o\t\4\w\t\b\v\w\i\y\a\6\b\l\f\e\e\h\v\5\1\w\j\q\x\1\g\6\h\g\0\n\s\s\o\f\8\t\h\6\3\v\r\8\7\a\0\d\v\6\y\g\g\z\u\b\p\b\c\8\i\r\1\f\k\y\i\8\y\i\e\r\b\g\v\y\5\q\5\4\q\1\x\5\c\e\g\e\0\3\j\b\4\6\2\f\d\3\l\d\g\z\4\e\9\d\l\5\v\4\h\8\x\l\p\m\i\s\6\8\2\8\v\6\v\i\j\6\2\d\o\r\t\l\v\3\w\s\h\2\2\x\u\p\5\r\3\n\c\5\d\7\5\i\y\c\n\a\4\4\1\t\3\4\k\1\s\o\m\p\x\9\f\b\d\q\3\b\g\x\1\h\i\r\u\l ]] 00:09:00.090 07:18:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:00.090 07:18:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:00.349 [2024-11-28 07:18:22.365654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:00.349 [2024-11-28 07:18:22.365754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70837 ] 00:09:00.349 [2024-11-28 07:18:22.504077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.349 [2024-11-28 07:18:22.564895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.608  [2024-11-28T07:18:22.883Z] Copying: 512/512 [B] (average 250 kBps) 00:09:00.608 00:09:00.608 07:18:22 -- dd/posix.sh@93 -- # [[ 4fqt30jcs0tetoqc21pin4cu8f3861yscfw753wf4ykholrygi0u1574q9xs3jzqkpxvfc18ccisazq0yzsnwcxr91zehq87pxsmuprrcv2v159jstpt1so4hla858vajjbz7wa40q3ncjv5ci0bn7pjir4cpwoyx507oxq40pe3dme1x8ldcbsbos0e7fjbo04w2s6gkv9k554znnh2jg1gb4h17nald7pzsaya4er5t5e4pg9ueuhrd92l4genxl91loldarsprrdgjfdtewi4ay8ohf0emp2tfkag3lg8ck393uvvj379bmhgprl3lbo0ifot4wtbvwiya6blfeehv51wjqx1g6hg0nssof8th63vr87a0dv6yggzubpbc8ir1fkyi8yierbgvy5q54q1x5cege03jb462fd3ldgz4e9dl5v4h8xlpmis6828v6vij62dortlv3wsh22xup5r3nc5d75iycna441t34k1sompx9fbdq3bgx1hirul == \4\f\q\t\3\0\j\c\s\0\t\e\t\o\q\c\2\1\p\i\n\4\c\u\8\f\3\8\6\1\y\s\c\f\w\7\5\3\w\f\4\y\k\h\o\l\r\y\g\i\0\u\1\5\7\4\q\9\x\s\3\j\z\q\k\p\x\v\f\c\1\8\c\c\i\s\a\z\q\0\y\z\s\n\w\c\x\r\9\1\z\e\h\q\8\7\p\x\s\m\u\p\r\r\c\v\2\v\1\5\9\j\s\t\p\t\1\s\o\4\h\l\a\8\5\8\v\a\j\j\b\z\7\w\a\4\0\q\3\n\c\j\v\5\c\i\0\b\n\7\p\j\i\r\4\c\p\w\o\y\x\5\0\7\o\x\q\4\0\p\e\3\d\m\e\1\x\8\l\d\c\b\s\b\o\s\0\e\7\f\j\b\o\0\4\w\2\s\6\g\k\v\9\k\5\5\4\z\n\n\h\2\j\g\1\g\b\4\h\1\7\n\a\l\d\7\p\z\s\a\y\a\4\e\r\5\t\5\e\4\p\g\9\u\e\u\h\r\d\9\2\l\4\g\e\n\x\l\9\1\l\o\l\d\a\r\s\p\r\r\d\g\j\f\d\t\e\w\i\4\a\y\8\o\h\f\0\e\m\p\2\t\f\k\a\g\3\l\g\8\c\k\3\9\3\u\v\v\j\3\7\9\b\m\h\g\p\r\l\3\l\b\o\0\i\f\o\t\4\w\t\b\v\w\i\y\a\6\b\l\f\e\e\h\v\5\1\w\j\q\x\1\g\6\h\g\0\n\s\s\o\f\8\t\h\6\3\v\r\8\7\a\0\d\v\6\y\g\g\z\u\b\p\b\c\8\i\r\1\f\k\y\i\8\y\i\e\r\b\g\v\y\5\q\5\4\q\1\x\5\c\e\g\e\0\3\j\b\4\6\2\f\d\3\l\d\g\z\4\e\9\d\l\5\v\4\h\8\x\l\p\m\i\s\6\8\2\8\v\6\v\i\j\6\2\d\o\r\t\l\v\3\w\s\h\2\2\x\u\p\5\r\3\n\c\5\d\7\5\i\y\c\n\a\4\4\1\t\3\4\k\1\s\o\m\p\x\9\f\b\d\q\3\b\g\x\1\h\i\r\u\l ]] 00:09:00.608 07:18:22 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:00.608 07:18:22 -- dd/posix.sh@86 -- # gen_bytes 512 00:09:00.608 07:18:22 -- dd/common.sh@98 -- # xtrace_disable 00:09:00.608 07:18:22 -- common/autotest_common.sh@10 -- # set +x 00:09:00.608 07:18:22 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:00.608 07:18:22 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:00.608 [2024-11-28 07:18:22.879182] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:00.608 [2024-11-28 07:18:22.879291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70845 ] 00:09:00.867 [2024-11-28 07:18:23.009385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.867 [2024-11-28 07:18:23.071886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.126  [2024-11-28T07:18:23.401Z] Copying: 512/512 [B] (average 500 kBps) 00:09:01.126 00:09:01.126 07:18:23 -- dd/posix.sh@93 -- # [[ 1ey1zkse0i40fa2j5f6v2d2ijg8q9x5kfi5nqf7b3h867oqbga2z15e8y4cnh4f88tktyc3lw81kfnw35ys7y6xjk5rezqh3drfrhqxj94ds5le0551w8tsndq319drmj73ggmuymod2rnkm3v4rw5upywf6kac8df54vzmron2qbdkaytiduoj1nbwuajfky4ic17yb0bsvnkz76qylgfjdg33157ctw42l07fc2s5823ynjg7yzij4m7nidsh7bfcfpi6e65drrm02ko46v56cciv82he4msgafrawd9hhgq8am8pn225oe2sv68ei50y0b12l8xxg11whv7v5x144uvijyltjy2hdggua73279wgp2xmys00hvvsx438bifqrkslme0prq44mwm4zbnorw8e094rj7drugg1g9dcfcl7vyla7mw93zzwgvbxswkaj08sd20ftp1c9w3th1qsxbr0vf0sp93c9pwidnbvdrex1995xdiicryzv2dky == \1\e\y\1\z\k\s\e\0\i\4\0\f\a\2\j\5\f\6\v\2\d\2\i\j\g\8\q\9\x\5\k\f\i\5\n\q\f\7\b\3\h\8\6\7\o\q\b\g\a\2\z\1\5\e\8\y\4\c\n\h\4\f\8\8\t\k\t\y\c\3\l\w\8\1\k\f\n\w\3\5\y\s\7\y\6\x\j\k\5\r\e\z\q\h\3\d\r\f\r\h\q\x\j\9\4\d\s\5\l\e\0\5\5\1\w\8\t\s\n\d\q\3\1\9\d\r\m\j\7\3\g\g\m\u\y\m\o\d\2\r\n\k\m\3\v\4\r\w\5\u\p\y\w\f\6\k\a\c\8\d\f\5\4\v\z\m\r\o\n\2\q\b\d\k\a\y\t\i\d\u\o\j\1\n\b\w\u\a\j\f\k\y\4\i\c\1\7\y\b\0\b\s\v\n\k\z\7\6\q\y\l\g\f\j\d\g\3\3\1\5\7\c\t\w\4\2\l\0\7\f\c\2\s\5\8\2\3\y\n\j\g\7\y\z\i\j\4\m\7\n\i\d\s\h\7\b\f\c\f\p\i\6\e\6\5\d\r\r\m\0\2\k\o\4\6\v\5\6\c\c\i\v\8\2\h\e\4\m\s\g\a\f\r\a\w\d\9\h\h\g\q\8\a\m\8\p\n\2\2\5\o\e\2\s\v\6\8\e\i\5\0\y\0\b\1\2\l\8\x\x\g\1\1\w\h\v\7\v\5\x\1\4\4\u\v\i\j\y\l\t\j\y\2\h\d\g\g\u\a\7\3\2\7\9\w\g\p\2\x\m\y\s\0\0\h\v\v\s\x\4\3\8\b\i\f\q\r\k\s\l\m\e\0\p\r\q\4\4\m\w\m\4\z\b\n\o\r\w\8\e\0\9\4\r\j\7\d\r\u\g\g\1\g\9\d\c\f\c\l\7\v\y\l\a\7\m\w\9\3\z\z\w\g\v\b\x\s\w\k\a\j\0\8\s\d\2\0\f\t\p\1\c\9\w\3\t\h\1\q\s\x\b\r\0\v\f\0\s\p\9\3\c\9\p\w\i\d\n\b\v\d\r\e\x\1\9\9\5\x\d\i\i\c\r\y\z\v\2\d\k\y ]] 00:09:01.126 07:18:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.126 07:18:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:01.126 [2024-11-28 07:18:23.395010] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:01.126 [2024-11-28 07:18:23.395123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70852 ] 00:09:01.386 [2024-11-28 07:18:23.525796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.386 [2024-11-28 07:18:23.580915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.386  [2024-11-28T07:18:23.920Z] Copying: 512/512 [B] (average 500 kBps) 00:09:01.645 00:09:01.646 07:18:23 -- dd/posix.sh@93 -- # [[ 1ey1zkse0i40fa2j5f6v2d2ijg8q9x5kfi5nqf7b3h867oqbga2z15e8y4cnh4f88tktyc3lw81kfnw35ys7y6xjk5rezqh3drfrhqxj94ds5le0551w8tsndq319drmj73ggmuymod2rnkm3v4rw5upywf6kac8df54vzmron2qbdkaytiduoj1nbwuajfky4ic17yb0bsvnkz76qylgfjdg33157ctw42l07fc2s5823ynjg7yzij4m7nidsh7bfcfpi6e65drrm02ko46v56cciv82he4msgafrawd9hhgq8am8pn225oe2sv68ei50y0b12l8xxg11whv7v5x144uvijyltjy2hdggua73279wgp2xmys00hvvsx438bifqrkslme0prq44mwm4zbnorw8e094rj7drugg1g9dcfcl7vyla7mw93zzwgvbxswkaj08sd20ftp1c9w3th1qsxbr0vf0sp93c9pwidnbvdrex1995xdiicryzv2dky == \1\e\y\1\z\k\s\e\0\i\4\0\f\a\2\j\5\f\6\v\2\d\2\i\j\g\8\q\9\x\5\k\f\i\5\n\q\f\7\b\3\h\8\6\7\o\q\b\g\a\2\z\1\5\e\8\y\4\c\n\h\4\f\8\8\t\k\t\y\c\3\l\w\8\1\k\f\n\w\3\5\y\s\7\y\6\x\j\k\5\r\e\z\q\h\3\d\r\f\r\h\q\x\j\9\4\d\s\5\l\e\0\5\5\1\w\8\t\s\n\d\q\3\1\9\d\r\m\j\7\3\g\g\m\u\y\m\o\d\2\r\n\k\m\3\v\4\r\w\5\u\p\y\w\f\6\k\a\c\8\d\f\5\4\v\z\m\r\o\n\2\q\b\d\k\a\y\t\i\d\u\o\j\1\n\b\w\u\a\j\f\k\y\4\i\c\1\7\y\b\0\b\s\v\n\k\z\7\6\q\y\l\g\f\j\d\g\3\3\1\5\7\c\t\w\4\2\l\0\7\f\c\2\s\5\8\2\3\y\n\j\g\7\y\z\i\j\4\m\7\n\i\d\s\h\7\b\f\c\f\p\i\6\e\6\5\d\r\r\m\0\2\k\o\4\6\v\5\6\c\c\i\v\8\2\h\e\4\m\s\g\a\f\r\a\w\d\9\h\h\g\q\8\a\m\8\p\n\2\2\5\o\e\2\s\v\6\8\e\i\5\0\y\0\b\1\2\l\8\x\x\g\1\1\w\h\v\7\v\5\x\1\4\4\u\v\i\j\y\l\t\j\y\2\h\d\g\g\u\a\7\3\2\7\9\w\g\p\2\x\m\y\s\0\0\h\v\v\s\x\4\3\8\b\i\f\q\r\k\s\l\m\e\0\p\r\q\4\4\m\w\m\4\z\b\n\o\r\w\8\e\0\9\4\r\j\7\d\r\u\g\g\1\g\9\d\c\f\c\l\7\v\y\l\a\7\m\w\9\3\z\z\w\g\v\b\x\s\w\k\a\j\0\8\s\d\2\0\f\t\p\1\c\9\w\3\t\h\1\q\s\x\b\r\0\v\f\0\s\p\9\3\c\9\p\w\i\d\n\b\v\d\r\e\x\1\9\9\5\x\d\i\i\c\r\y\z\v\2\d\k\y ]] 00:09:01.646 07:18:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.646 07:18:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:01.904 [2024-11-28 07:18:23.924743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:01.904 [2024-11-28 07:18:23.924866] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70860 ] 00:09:01.904 [2024-11-28 07:18:24.063435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.904 [2024-11-28 07:18:24.132984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.163  [2024-11-28T07:18:24.438Z] Copying: 512/512 [B] (average 500 kBps) 00:09:02.163 00:09:02.163 07:18:24 -- dd/posix.sh@93 -- # [[ 1ey1zkse0i40fa2j5f6v2d2ijg8q9x5kfi5nqf7b3h867oqbga2z15e8y4cnh4f88tktyc3lw81kfnw35ys7y6xjk5rezqh3drfrhqxj94ds5le0551w8tsndq319drmj73ggmuymod2rnkm3v4rw5upywf6kac8df54vzmron2qbdkaytiduoj1nbwuajfky4ic17yb0bsvnkz76qylgfjdg33157ctw42l07fc2s5823ynjg7yzij4m7nidsh7bfcfpi6e65drrm02ko46v56cciv82he4msgafrawd9hhgq8am8pn225oe2sv68ei50y0b12l8xxg11whv7v5x144uvijyltjy2hdggua73279wgp2xmys00hvvsx438bifqrkslme0prq44mwm4zbnorw8e094rj7drugg1g9dcfcl7vyla7mw93zzwgvbxswkaj08sd20ftp1c9w3th1qsxbr0vf0sp93c9pwidnbvdrex1995xdiicryzv2dky == \1\e\y\1\z\k\s\e\0\i\4\0\f\a\2\j\5\f\6\v\2\d\2\i\j\g\8\q\9\x\5\k\f\i\5\n\q\f\7\b\3\h\8\6\7\o\q\b\g\a\2\z\1\5\e\8\y\4\c\n\h\4\f\8\8\t\k\t\y\c\3\l\w\8\1\k\f\n\w\3\5\y\s\7\y\6\x\j\k\5\r\e\z\q\h\3\d\r\f\r\h\q\x\j\9\4\d\s\5\l\e\0\5\5\1\w\8\t\s\n\d\q\3\1\9\d\r\m\j\7\3\g\g\m\u\y\m\o\d\2\r\n\k\m\3\v\4\r\w\5\u\p\y\w\f\6\k\a\c\8\d\f\5\4\v\z\m\r\o\n\2\q\b\d\k\a\y\t\i\d\u\o\j\1\n\b\w\u\a\j\f\k\y\4\i\c\1\7\y\b\0\b\s\v\n\k\z\7\6\q\y\l\g\f\j\d\g\3\3\1\5\7\c\t\w\4\2\l\0\7\f\c\2\s\5\8\2\3\y\n\j\g\7\y\z\i\j\4\m\7\n\i\d\s\h\7\b\f\c\f\p\i\6\e\6\5\d\r\r\m\0\2\k\o\4\6\v\5\6\c\c\i\v\8\2\h\e\4\m\s\g\a\f\r\a\w\d\9\h\h\g\q\8\a\m\8\p\n\2\2\5\o\e\2\s\v\6\8\e\i\5\0\y\0\b\1\2\l\8\x\x\g\1\1\w\h\v\7\v\5\x\1\4\4\u\v\i\j\y\l\t\j\y\2\h\d\g\g\u\a\7\3\2\7\9\w\g\p\2\x\m\y\s\0\0\h\v\v\s\x\4\3\8\b\i\f\q\r\k\s\l\m\e\0\p\r\q\4\4\m\w\m\4\z\b\n\o\r\w\8\e\0\9\4\r\j\7\d\r\u\g\g\1\g\9\d\c\f\c\l\7\v\y\l\a\7\m\w\9\3\z\z\w\g\v\b\x\s\w\k\a\j\0\8\s\d\2\0\f\t\p\1\c\9\w\3\t\h\1\q\s\x\b\r\0\v\f\0\s\p\9\3\c\9\p\w\i\d\n\b\v\d\r\e\x\1\9\9\5\x\d\i\i\c\r\y\z\v\2\d\k\y ]] 00:09:02.163 07:18:24 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:02.163 07:18:24 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:02.422 [2024-11-28 07:18:24.446234] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:02.422 [2024-11-28 07:18:24.446370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70867 ] 00:09:02.422 [2024-11-28 07:18:24.579656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.422 [2024-11-28 07:18:24.653520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.681  [2024-11-28T07:18:24.956Z] Copying: 512/512 [B] (average 250 kBps) 00:09:02.681 00:09:02.956 07:18:24 -- dd/posix.sh@93 -- # [[ 1ey1zkse0i40fa2j5f6v2d2ijg8q9x5kfi5nqf7b3h867oqbga2z15e8y4cnh4f88tktyc3lw81kfnw35ys7y6xjk5rezqh3drfrhqxj94ds5le0551w8tsndq319drmj73ggmuymod2rnkm3v4rw5upywf6kac8df54vzmron2qbdkaytiduoj1nbwuajfky4ic17yb0bsvnkz76qylgfjdg33157ctw42l07fc2s5823ynjg7yzij4m7nidsh7bfcfpi6e65drrm02ko46v56cciv82he4msgafrawd9hhgq8am8pn225oe2sv68ei50y0b12l8xxg11whv7v5x144uvijyltjy2hdggua73279wgp2xmys00hvvsx438bifqrkslme0prq44mwm4zbnorw8e094rj7drugg1g9dcfcl7vyla7mw93zzwgvbxswkaj08sd20ftp1c9w3th1qsxbr0vf0sp93c9pwidnbvdrex1995xdiicryzv2dky == \1\e\y\1\z\k\s\e\0\i\4\0\f\a\2\j\5\f\6\v\2\d\2\i\j\g\8\q\9\x\5\k\f\i\5\n\q\f\7\b\3\h\8\6\7\o\q\b\g\a\2\z\1\5\e\8\y\4\c\n\h\4\f\8\8\t\k\t\y\c\3\l\w\8\1\k\f\n\w\3\5\y\s\7\y\6\x\j\k\5\r\e\z\q\h\3\d\r\f\r\h\q\x\j\9\4\d\s\5\l\e\0\5\5\1\w\8\t\s\n\d\q\3\1\9\d\r\m\j\7\3\g\g\m\u\y\m\o\d\2\r\n\k\m\3\v\4\r\w\5\u\p\y\w\f\6\k\a\c\8\d\f\5\4\v\z\m\r\o\n\2\q\b\d\k\a\y\t\i\d\u\o\j\1\n\b\w\u\a\j\f\k\y\4\i\c\1\7\y\b\0\b\s\v\n\k\z\7\6\q\y\l\g\f\j\d\g\3\3\1\5\7\c\t\w\4\2\l\0\7\f\c\2\s\5\8\2\3\y\n\j\g\7\y\z\i\j\4\m\7\n\i\d\s\h\7\b\f\c\f\p\i\6\e\6\5\d\r\r\m\0\2\k\o\4\6\v\5\6\c\c\i\v\8\2\h\e\4\m\s\g\a\f\r\a\w\d\9\h\h\g\q\8\a\m\8\p\n\2\2\5\o\e\2\s\v\6\8\e\i\5\0\y\0\b\1\2\l\8\x\x\g\1\1\w\h\v\7\v\5\x\1\4\4\u\v\i\j\y\l\t\j\y\2\h\d\g\g\u\a\7\3\2\7\9\w\g\p\2\x\m\y\s\0\0\h\v\v\s\x\4\3\8\b\i\f\q\r\k\s\l\m\e\0\p\r\q\4\4\m\w\m\4\z\b\n\o\r\w\8\e\0\9\4\r\j\7\d\r\u\g\g\1\g\9\d\c\f\c\l\7\v\y\l\a\7\m\w\9\3\z\z\w\g\v\b\x\s\w\k\a\j\0\8\s\d\2\0\f\t\p\1\c\9\w\3\t\h\1\q\s\x\b\r\0\v\f\0\s\p\9\3\c\9\p\w\i\d\n\b\v\d\r\e\x\1\9\9\5\x\d\i\i\c\r\y\z\v\2\d\k\y ]] 00:09:02.956 00:09:02.956 real 0m4.278s 00:09:02.956 user 0m2.221s 00:09:02.956 sys 0m1.074s 00:09:02.956 07:18:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.956 07:18:24 -- common/autotest_common.sh@10 -- # set +x 00:09:02.956 ************************************ 00:09:02.956 END TEST dd_flags_misc_forced_aio 00:09:02.956 ************************************ 00:09:02.956 07:18:24 -- dd/posix.sh@1 -- # cleanup 00:09:02.956 07:18:24 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:02.956 07:18:25 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:02.956 00:09:02.956 real 0m20.385s 00:09:02.956 user 0m9.735s 00:09:02.956 sys 0m4.835s 00:09:02.956 07:18:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.957 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:09:02.957 ************************************ 00:09:02.957 END TEST spdk_dd_posix 00:09:02.957 ************************************ 00:09:02.957 07:18:25 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:02.957 07:18:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:02.957 07:18:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 
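The dd_flags_misc_forced_aio runs above sweep a small flag matrix: each read flag in (direct nonblock) is paired with each write flag in (direct nonblock sync dsync), and the 512-byte payload is compared after every copy. A stripped-down version of that loop, with the content check reduced to cmp:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    "$DD" --aio --if="$D/dd.dump0" --iflag="$flag_ro" --of="$D/dd.dump1" --oflag="$flag_rw"
    cmp -s "$D/dd.dump0" "$D/dd.dump1" || echo "content mismatch for $flag_ro -> $flag_rw" >&2
  done
done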
00:09:02.957 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:09:02.957 ************************************ 00:09:02.957 START TEST spdk_dd_malloc 00:09:02.957 ************************************ 00:09:02.957 07:18:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:02.957 * Looking for test storage... 00:09:02.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:02.957 07:18:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:02.957 07:18:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:02.957 07:18:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:03.216 07:18:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:03.216 07:18:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:03.216 07:18:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:03.216 07:18:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:03.216 07:18:25 -- scripts/common.sh@335 -- # IFS=.-: 00:09:03.216 07:18:25 -- scripts/common.sh@335 -- # read -ra ver1 00:09:03.216 07:18:25 -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.216 07:18:25 -- scripts/common.sh@336 -- # read -ra ver2 00:09:03.216 07:18:25 -- scripts/common.sh@337 -- # local 'op=<' 00:09:03.216 07:18:25 -- scripts/common.sh@339 -- # ver1_l=2 00:09:03.216 07:18:25 -- scripts/common.sh@340 -- # ver2_l=1 00:09:03.216 07:18:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:03.216 07:18:25 -- scripts/common.sh@343 -- # case "$op" in 00:09:03.216 07:18:25 -- scripts/common.sh@344 -- # : 1 00:09:03.216 07:18:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:03.216 07:18:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.216 07:18:25 -- scripts/common.sh@364 -- # decimal 1 00:09:03.216 07:18:25 -- scripts/common.sh@352 -- # local d=1 00:09:03.216 07:18:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.216 07:18:25 -- scripts/common.sh@354 -- # echo 1 00:09:03.216 07:18:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:03.216 07:18:25 -- scripts/common.sh@365 -- # decimal 2 00:09:03.216 07:18:25 -- scripts/common.sh@352 -- # local d=2 00:09:03.216 07:18:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.216 07:18:25 -- scripts/common.sh@354 -- # echo 2 00:09:03.216 07:18:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:03.216 07:18:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:03.216 07:18:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:03.216 07:18:25 -- scripts/common.sh@367 -- # return 0 00:09:03.216 07:18:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.216 07:18:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.216 --rc genhtml_branch_coverage=1 00:09:03.216 --rc genhtml_function_coverage=1 00:09:03.216 --rc genhtml_legend=1 00:09:03.216 --rc geninfo_all_blocks=1 00:09:03.216 --rc geninfo_unexecuted_blocks=1 00:09:03.216 00:09:03.216 ' 00:09:03.216 07:18:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.216 --rc genhtml_branch_coverage=1 00:09:03.216 --rc genhtml_function_coverage=1 00:09:03.216 --rc genhtml_legend=1 00:09:03.216 --rc geninfo_all_blocks=1 00:09:03.216 --rc geninfo_unexecuted_blocks=1 00:09:03.216 00:09:03.216 ' 00:09:03.216 07:18:25 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:09:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.216 --rc genhtml_branch_coverage=1 00:09:03.216 --rc genhtml_function_coverage=1 00:09:03.216 --rc genhtml_legend=1 00:09:03.216 --rc geninfo_all_blocks=1 00:09:03.216 --rc geninfo_unexecuted_blocks=1 00:09:03.216 00:09:03.216 ' 00:09:03.216 07:18:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:03.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.216 --rc genhtml_branch_coverage=1 00:09:03.216 --rc genhtml_function_coverage=1 00:09:03.216 --rc genhtml_legend=1 00:09:03.216 --rc geninfo_all_blocks=1 00:09:03.216 --rc geninfo_unexecuted_blocks=1 00:09:03.216 00:09:03.216 ' 00:09:03.216 07:18:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:03.216 07:18:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.216 07:18:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.216 07:18:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.216 07:18:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.216 07:18:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.216 07:18:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.216 07:18:25 -- paths/export.sh@5 -- # export PATH 00:09:03.216 07:18:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.216 07:18:25 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:03.216 07:18:25 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:03.216 07:18:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.216 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:09:03.216 ************************************ 00:09:03.216 START TEST dd_malloc_copy 00:09:03.216 ************************************ 00:09:03.216 07:18:25 -- common/autotest_common.sh@1114 -- # malloc_copy 00:09:03.216 07:18:25 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:03.216 07:18:25 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:03.216 07:18:25 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:03.216 07:18:25 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:03.216 07:18:25 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:03.216 07:18:25 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:03.216 07:18:25 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:03.216 07:18:25 -- dd/malloc.sh@28 -- # gen_conf 00:09:03.216 07:18:25 -- dd/common.sh@31 -- # xtrace_disable 00:09:03.216 07:18:25 -- common/autotest_common.sh@10 -- # set +x 00:09:03.216 [2024-11-28 07:18:25.305687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:03.216 [2024-11-28 07:18:25.305790] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70947 ] 00:09:03.216 { 00:09:03.216 "subsystems": [ 00:09:03.216 { 00:09:03.216 "subsystem": "bdev", 00:09:03.216 "config": [ 00:09:03.216 { 00:09:03.216 "params": { 00:09:03.216 "block_size": 512, 00:09:03.216 "num_blocks": 1048576, 00:09:03.216 "name": "malloc0" 00:09:03.216 }, 00:09:03.216 "method": "bdev_malloc_create" 00:09:03.216 }, 00:09:03.216 { 00:09:03.216 "params": { 00:09:03.216 "block_size": 512, 00:09:03.216 "num_blocks": 1048576, 00:09:03.216 "name": "malloc1" 00:09:03.216 }, 00:09:03.216 "method": "bdev_malloc_create" 00:09:03.216 }, 00:09:03.216 { 00:09:03.216 "method": "bdev_wait_for_examine" 00:09:03.216 } 00:09:03.216 ] 00:09:03.216 } 00:09:03.216 ] 00:09:03.216 } 00:09:03.216 [2024-11-28 07:18:25.444722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.475 [2024-11-28 07:18:25.512365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.852  [2024-11-28T07:18:28.063Z] Copying: 225/512 [MB] (225 MBps) [2024-11-28T07:18:28.341Z] Copying: 454/512 [MB] (229 MBps) [2024-11-28T07:18:28.928Z] Copying: 512/512 [MB] (average 227 MBps) 00:09:06.653 00:09:06.653 07:18:28 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:06.653 07:18:28 -- dd/malloc.sh@33 -- # gen_conf 00:09:06.653 07:18:28 -- dd/common.sh@31 -- # xtrace_disable 00:09:06.653 07:18:28 -- common/autotest_common.sh@10 -- # set +x 00:09:06.653 [2024-11-28 07:18:28.784712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:06.653 [2024-11-28 07:18:28.785491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:09:06.653 { 00:09:06.653 "subsystems": [ 00:09:06.653 { 00:09:06.653 "subsystem": "bdev", 00:09:06.653 "config": [ 00:09:06.653 { 00:09:06.653 "params": { 00:09:06.653 "block_size": 512, 00:09:06.653 "num_blocks": 1048576, 00:09:06.653 "name": "malloc0" 00:09:06.653 }, 00:09:06.653 "method": "bdev_malloc_create" 00:09:06.653 }, 00:09:06.653 { 00:09:06.653 "params": { 00:09:06.653 "block_size": 512, 00:09:06.653 "num_blocks": 1048576, 00:09:06.653 "name": "malloc1" 00:09:06.653 }, 00:09:06.653 "method": "bdev_malloc_create" 00:09:06.653 }, 00:09:06.653 { 00:09:06.653 "method": "bdev_wait_for_examine" 00:09:06.653 } 00:09:06.653 ] 00:09:06.653 } 00:09:06.653 ] 00:09:06.653 } 00:09:06.912 [2024-11-28 07:18:28.928278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.912 [2024-11-28 07:18:29.000828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.289  [2024-11-28T07:18:31.508Z] Copying: 222/512 [MB] (222 MBps) [2024-11-28T07:18:31.765Z] Copying: 447/512 [MB] (225 MBps) [2024-11-28T07:18:32.332Z] Copying: 512/512 [MB] (average 223 MBps) 00:09:10.057 00:09:10.057 00:09:10.057 real 0m7.014s 00:09:10.057 user 0m6.007s 00:09:10.057 sys 0m0.837s 00:09:10.057 07:18:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.057 07:18:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.057 ************************************ 00:09:10.057 END TEST dd_malloc_copy 00:09:10.057 ************************************ 00:09:10.057 00:09:10.057 real 0m7.258s 00:09:10.057 user 0m6.143s 00:09:10.057 sys 0m0.947s 00:09:10.057 07:18:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.057 07:18:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.057 ************************************ 00:09:10.057 END TEST spdk_dd_malloc 00:09:10.057 ************************************ 00:09:10.317 07:18:32 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:09:10.317 07:18:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:10.317 07:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.317 07:18:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.317 ************************************ 00:09:10.317 START TEST spdk_dd_bdev_to_bdev 00:09:10.317 ************************************ 00:09:10.317 07:18:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:09:10.317 * Looking for test storage... 
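The dd_malloc_copy test above is configured entirely through the generated JSON shown in the trace: two 512 MiB malloc bdevs (1048576 blocks of 512 bytes) plus a bdev_wait_for_examine step, with spdk_dd copying one bdev into the other. A standalone sketch, writing the config to a file rather than passing it over /dev/fd/62:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > /tmp/dd_malloc.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_malloc_create", "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_wait_for_examine" }
] } ] }
JSON
"$DD" --ib=malloc0 --ob=malloc1 --json /tmp/dd_malloc.json   # copy 512 MiB from malloc0 to malloc1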
00:09:10.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:10.317 07:18:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:10.317 07:18:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:10.317 07:18:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:10.317 07:18:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:10.317 07:18:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:10.317 07:18:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:10.317 07:18:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:10.317 07:18:32 -- scripts/common.sh@335 -- # IFS=.-: 00:09:10.317 07:18:32 -- scripts/common.sh@335 -- # read -ra ver1 00:09:10.317 07:18:32 -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.317 07:18:32 -- scripts/common.sh@336 -- # read -ra ver2 00:09:10.317 07:18:32 -- scripts/common.sh@337 -- # local 'op=<' 00:09:10.317 07:18:32 -- scripts/common.sh@339 -- # ver1_l=2 00:09:10.317 07:18:32 -- scripts/common.sh@340 -- # ver2_l=1 00:09:10.317 07:18:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:10.317 07:18:32 -- scripts/common.sh@343 -- # case "$op" in 00:09:10.317 07:18:32 -- scripts/common.sh@344 -- # : 1 00:09:10.317 07:18:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:10.317 07:18:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.317 07:18:32 -- scripts/common.sh@364 -- # decimal 1 00:09:10.317 07:18:32 -- scripts/common.sh@352 -- # local d=1 00:09:10.317 07:18:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.317 07:18:32 -- scripts/common.sh@354 -- # echo 1 00:09:10.317 07:18:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:10.317 07:18:32 -- scripts/common.sh@365 -- # decimal 2 00:09:10.317 07:18:32 -- scripts/common.sh@352 -- # local d=2 00:09:10.317 07:18:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.317 07:18:32 -- scripts/common.sh@354 -- # echo 2 00:09:10.317 07:18:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:10.317 07:18:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:10.317 07:18:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:10.317 07:18:32 -- scripts/common.sh@367 -- # return 0 00:09:10.317 07:18:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.317 07:18:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:10.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.317 --rc genhtml_branch_coverage=1 00:09:10.317 --rc genhtml_function_coverage=1 00:09:10.317 --rc genhtml_legend=1 00:09:10.317 --rc geninfo_all_blocks=1 00:09:10.317 --rc geninfo_unexecuted_blocks=1 00:09:10.317 00:09:10.317 ' 00:09:10.317 07:18:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:10.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.317 --rc genhtml_branch_coverage=1 00:09:10.317 --rc genhtml_function_coverage=1 00:09:10.317 --rc genhtml_legend=1 00:09:10.317 --rc geninfo_all_blocks=1 00:09:10.317 --rc geninfo_unexecuted_blocks=1 00:09:10.317 00:09:10.317 ' 00:09:10.317 07:18:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:10.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.317 --rc genhtml_branch_coverage=1 00:09:10.317 --rc genhtml_function_coverage=1 00:09:10.317 --rc genhtml_legend=1 00:09:10.317 --rc geninfo_all_blocks=1 00:09:10.317 --rc geninfo_unexecuted_blocks=1 00:09:10.317 00:09:10.317 ' 00:09:10.317 07:18:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:10.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.317 --rc genhtml_branch_coverage=1 00:09:10.317 --rc genhtml_function_coverage=1 00:09:10.317 --rc genhtml_legend=1 00:09:10.317 --rc geninfo_all_blocks=1 00:09:10.317 --rc geninfo_unexecuted_blocks=1 00:09:10.317 00:09:10.317 ' 00:09:10.317 07:18:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.317 07:18:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.317 07:18:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.317 07:18:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.317 07:18:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.317 07:18:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.317 07:18:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.317 07:18:32 -- paths/export.sh@5 -- # export PATH 00:09:10.317 07:18:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:10.317 07:18:32 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:10.317 07:18:32 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:10.317 07:18:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.317 07:18:32 -- common/autotest_common.sh@10 -- # set +x 00:09:10.317 ************************************ 00:09:10.317 START TEST dd_inflate_file 00:09:10.317 ************************************ 00:09:10.317 07:18:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:10.576 [2024-11-28 07:18:32.603698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:10.576 [2024-11-28 07:18:32.604343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71103 ] 00:09:10.576 [2024-11-28 07:18:32.745621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.576 [2024-11-28 07:18:32.816060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.835  [2024-11-28T07:18:33.369Z] Copying: 64/64 [MB] (average 1560 MBps) 00:09:11.094 00:09:11.094 00:09:11.094 real 0m0.606s 00:09:11.094 user 0m0.284s 00:09:11.094 sys 0m0.191s 00:09:11.094 07:18:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.094 07:18:33 -- common/autotest_common.sh@10 -- # set +x 00:09:11.094 ************************************ 00:09:11.094 END TEST dd_inflate_file 00:09:11.094 ************************************ 00:09:11.094 07:18:33 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:11.094 07:18:33 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:11.094 07:18:33 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:11.094 07:18:33 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:11.094 07:18:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:11.094 07:18:33 -- dd/common.sh@31 -- # xtrace_disable 00:09:11.094 07:18:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.094 07:18:33 -- common/autotest_common.sh@10 -- # set +x 00:09:11.094 07:18:33 -- common/autotest_common.sh@10 -- # set +x 00:09:11.094 ************************************ 00:09:11.094 START TEST dd_copy_to_out_bdev 00:09:11.094 ************************************ 00:09:11.094 07:18:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:11.094 [2024-11-28 07:18:33.257775] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
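Annotation for readers skimming the trace: the dd_inflate_file case that finished just above boils down to appending 64 one-MiB blocks of zeroes to a dump file that already holds the 27-byte magic line, which is why the size check a few lines later reports 67108891 bytes (64*1048576 + 27). A condensed sketch, with the redirection of the magic line into dd.dump0 assumed and the spdk_dd flags taken verbatim from the trace (paths shortened):

  echo 'This Is Our Magic, find it' > dd.dump0                      # 26 chars + newline = 27 bytes (redirect assumed)
  spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64   # append 64 MiB of zeroes
  wc -c < dd.dump0                                                   # 67108891 = 64*1048576 + 27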
00:09:11.094 [2024-11-28 07:18:33.257879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71139 ] 00:09:11.094 { 00:09:11.094 "subsystems": [ 00:09:11.094 { 00:09:11.094 "subsystem": "bdev", 00:09:11.094 "config": [ 00:09:11.094 { 00:09:11.094 "params": { 00:09:11.094 "trtype": "pcie", 00:09:11.094 "traddr": "0000:00:06.0", 00:09:11.094 "name": "Nvme0" 00:09:11.094 }, 00:09:11.094 "method": "bdev_nvme_attach_controller" 00:09:11.094 }, 00:09:11.094 { 00:09:11.094 "params": { 00:09:11.094 "trtype": "pcie", 00:09:11.094 "traddr": "0000:00:07.0", 00:09:11.094 "name": "Nvme1" 00:09:11.094 }, 00:09:11.094 "method": "bdev_nvme_attach_controller" 00:09:11.094 }, 00:09:11.094 { 00:09:11.094 "method": "bdev_wait_for_examine" 00:09:11.094 } 00:09:11.094 ] 00:09:11.094 } 00:09:11.094 ] 00:09:11.094 } 00:09:11.353 [2024-11-28 07:18:33.382160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.353 [2024-11-28 07:18:33.438027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.730  [2024-11-28T07:18:35.005Z] Copying: 48/64 [MB] (48 MBps) [2024-11-28T07:18:35.263Z] Copying: 64/64 [MB] (average 48 MBps) 00:09:12.988 00:09:12.988 00:09:12.988 real 0m1.997s 00:09:12.988 user 0m1.728s 00:09:12.988 sys 0m0.204s 00:09:12.988 07:18:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:12.988 07:18:35 -- common/autotest_common.sh@10 -- # set +x 00:09:12.988 ************************************ 00:09:12.988 END TEST dd_copy_to_out_bdev 00:09:12.988 ************************************ 00:09:13.246 07:18:35 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:13.246 07:18:35 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:13.246 07:18:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:13.246 07:18:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.246 07:18:35 -- common/autotest_common.sh@10 -- # set +x 00:09:13.246 ************************************ 00:09:13.246 START TEST dd_offset_magic 00:09:13.246 ************************************ 00:09:13.246 07:18:35 -- common/autotest_common.sh@1114 -- # offset_magic 00:09:13.246 07:18:35 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:13.246 07:18:35 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:13.246 07:18:35 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:13.246 07:18:35 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:13.246 07:18:35 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:13.246 07:18:35 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:13.246 07:18:35 -- dd/common.sh@31 -- # xtrace_disable 00:09:13.246 07:18:35 -- common/autotest_common.sh@10 -- # set +x 00:09:13.246 [2024-11-28 07:18:35.321193] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
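A gloss on the recurring "--json /dev/fd/62" argument: the JSON block echoed into the trace above is evidently what gen_conf (from dd/common.sh) prints, and the file descriptor is what a process substitution expands to, so every spdk_dd run here attaches the two NVMe controllers before copying. A hand-rolled illustrative equivalent, assuming the same PCI addresses and a shortened dump path, might look like:

  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
    {"params":{"trtype":"pcie","traddr":"0000:00:07.0","name":"Nvme1"},"method":"bdev_nvme_attach_controller"},
    {"method":"bdev_wait_for_examine"}]}]}'
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --json <(printf '%s' "$conf")   # file -> Nvme0n1 bdev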
00:09:13.246 [2024-11-28 07:18:35.321284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71184 ] 00:09:13.246 { 00:09:13.246 "subsystems": [ 00:09:13.246 { 00:09:13.246 "subsystem": "bdev", 00:09:13.247 "config": [ 00:09:13.247 { 00:09:13.247 "params": { 00:09:13.247 "trtype": "pcie", 00:09:13.247 "traddr": "0000:00:06.0", 00:09:13.247 "name": "Nvme0" 00:09:13.247 }, 00:09:13.247 "method": "bdev_nvme_attach_controller" 00:09:13.247 }, 00:09:13.247 { 00:09:13.247 "params": { 00:09:13.247 "trtype": "pcie", 00:09:13.247 "traddr": "0000:00:07.0", 00:09:13.247 "name": "Nvme1" 00:09:13.247 }, 00:09:13.247 "method": "bdev_nvme_attach_controller" 00:09:13.247 }, 00:09:13.247 { 00:09:13.247 "method": "bdev_wait_for_examine" 00:09:13.247 } 00:09:13.247 ] 00:09:13.247 } 00:09:13.247 ] 00:09:13.247 } 00:09:13.247 [2024-11-28 07:18:35.460858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.505 [2024-11-28 07:18:35.526356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.764  [2024-11-28T07:18:36.297Z] Copying: 65/65 [MB] (average 783 MBps) 00:09:14.022 00:09:14.023 07:18:36 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:14.023 07:18:36 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:14.023 07:18:36 -- dd/common.sh@31 -- # xtrace_disable 00:09:14.023 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.023 [2024-11-28 07:18:36.112496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:14.023 [2024-11-28 07:18:36.113054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71198 ] 00:09:14.023 { 00:09:14.023 "subsystems": [ 00:09:14.023 { 00:09:14.023 "subsystem": "bdev", 00:09:14.023 "config": [ 00:09:14.023 { 00:09:14.023 "params": { 00:09:14.023 "trtype": "pcie", 00:09:14.023 "traddr": "0000:00:06.0", 00:09:14.023 "name": "Nvme0" 00:09:14.023 }, 00:09:14.023 "method": "bdev_nvme_attach_controller" 00:09:14.023 }, 00:09:14.023 { 00:09:14.023 "params": { 00:09:14.023 "trtype": "pcie", 00:09:14.023 "traddr": "0000:00:07.0", 00:09:14.023 "name": "Nvme1" 00:09:14.023 }, 00:09:14.023 "method": "bdev_nvme_attach_controller" 00:09:14.023 }, 00:09:14.023 { 00:09:14.023 "method": "bdev_wait_for_examine" 00:09:14.023 } 00:09:14.023 ] 00:09:14.023 } 00:09:14.023 ] 00:09:14.023 } 00:09:14.023 [2024-11-28 07:18:36.251097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.281 [2024-11-28 07:18:36.301034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.281  [2024-11-28T07:18:36.815Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:14.540 00:09:14.540 07:18:36 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:14.540 07:18:36 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:14.540 07:18:36 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:14.540 07:18:36 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:14.540 07:18:36 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:14.540 07:18:36 -- dd/common.sh@31 -- # xtrace_disable 00:09:14.540 07:18:36 -- common/autotest_common.sh@10 -- # set +x 00:09:14.540 [2024-11-28 07:18:36.765402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:14.540 [2024-11-28 07:18:36.765995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71213 ] 00:09:14.540 { 00:09:14.540 "subsystems": [ 00:09:14.540 { 00:09:14.540 "subsystem": "bdev", 00:09:14.540 "config": [ 00:09:14.540 { 00:09:14.540 "params": { 00:09:14.540 "trtype": "pcie", 00:09:14.540 "traddr": "0000:00:06.0", 00:09:14.540 "name": "Nvme0" 00:09:14.540 }, 00:09:14.540 "method": "bdev_nvme_attach_controller" 00:09:14.540 }, 00:09:14.540 { 00:09:14.540 "params": { 00:09:14.540 "trtype": "pcie", 00:09:14.540 "traddr": "0000:00:07.0", 00:09:14.540 "name": "Nvme1" 00:09:14.540 }, 00:09:14.540 "method": "bdev_nvme_attach_controller" 00:09:14.540 }, 00:09:14.540 { 00:09:14.540 "method": "bdev_wait_for_examine" 00:09:14.540 } 00:09:14.540 ] 00:09:14.540 } 00:09:14.540 ] 00:09:14.540 } 00:09:14.799 [2024-11-28 07:18:36.900964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.799 [2024-11-28 07:18:36.952129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.058  [2024-11-28T07:18:37.592Z] Copying: 65/65 [MB] (average 855 MBps) 00:09:15.317 00:09:15.317 07:18:37 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:15.317 07:18:37 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:15.317 07:18:37 -- dd/common.sh@31 -- # xtrace_disable 00:09:15.317 07:18:37 -- common/autotest_common.sh@10 -- # set +x 00:09:15.317 [2024-11-28 07:18:37.506648] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:15.317 [2024-11-28 07:18:37.506741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71233 ] 00:09:15.317 { 00:09:15.318 "subsystems": [ 00:09:15.318 { 00:09:15.318 "subsystem": "bdev", 00:09:15.318 "config": [ 00:09:15.318 { 00:09:15.318 "params": { 00:09:15.318 "trtype": "pcie", 00:09:15.318 "traddr": "0000:00:06.0", 00:09:15.318 "name": "Nvme0" 00:09:15.318 }, 00:09:15.318 "method": "bdev_nvme_attach_controller" 00:09:15.318 }, 00:09:15.318 { 00:09:15.318 "params": { 00:09:15.318 "trtype": "pcie", 00:09:15.318 "traddr": "0000:00:07.0", 00:09:15.318 "name": "Nvme1" 00:09:15.318 }, 00:09:15.318 "method": "bdev_nvme_attach_controller" 00:09:15.318 }, 00:09:15.318 { 00:09:15.318 "method": "bdev_wait_for_examine" 00:09:15.318 } 00:09:15.318 ] 00:09:15.318 } 00:09:15.318 ] 00:09:15.318 } 00:09:15.576 [2024-11-28 07:18:37.644918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.576 [2024-11-28 07:18:37.708342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.835  [2024-11-28T07:18:38.369Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:16.094 00:09:16.094 ************************************ 00:09:16.094 END TEST dd_offset_magic 00:09:16.094 ************************************ 00:09:16.094 07:18:38 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:16.094 07:18:38 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:16.094 00:09:16.094 real 0m2.867s 00:09:16.094 user 0m1.984s 00:09:16.094 sys 0m0.662s 00:09:16.094 07:18:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:16.094 07:18:38 -- common/autotest_common.sh@10 -- # set +x 00:09:16.094 07:18:38 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:16.094 07:18:38 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:16.094 07:18:38 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:16.094 07:18:38 -- dd/common.sh@11 -- # local nvme_ref= 00:09:16.094 07:18:38 -- dd/common.sh@12 -- # local size=4194330 00:09:16.094 07:18:38 -- dd/common.sh@14 -- # local bs=1048576 00:09:16.094 07:18:38 -- dd/common.sh@15 -- # local count=5 00:09:16.094 07:18:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:16.094 07:18:38 -- dd/common.sh@18 -- # gen_conf 00:09:16.094 07:18:38 -- dd/common.sh@31 -- # xtrace_disable 00:09:16.094 07:18:38 -- common/autotest_common.sh@10 -- # set +x 00:09:16.094 [2024-11-28 07:18:38.226920] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
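The dd_offset_magic case wrapped up above follows the same write/read-back pattern for each offset in (16, 64): push 65 one-MiB blocks from Nvme0n1 into Nvme1n1 at that block offset, pull a single block back out at the same offset into dd.dump1, and check that its first 26 bytes are still the magic string. Roughly, with the input redirection on read assumed and gen_conf as above:

  for offset in 16 64; do
      spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json <(gen_conf)
      spdk_dd --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip="$offset" --bs=1048576 --json <(gen_conf)
      read -rn26 magic_check < dd.dump1
      [[ $magic_check == 'This Is Our Magic, find it' ]]             # the pass/fail hinges on this compare
  done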
00:09:16.094 [2024-11-28 07:18:38.227030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71268 ] 00:09:16.094 { 00:09:16.094 "subsystems": [ 00:09:16.094 { 00:09:16.094 "subsystem": "bdev", 00:09:16.094 "config": [ 00:09:16.094 { 00:09:16.094 "params": { 00:09:16.094 "trtype": "pcie", 00:09:16.094 "traddr": "0000:00:06.0", 00:09:16.094 "name": "Nvme0" 00:09:16.094 }, 00:09:16.094 "method": "bdev_nvme_attach_controller" 00:09:16.094 }, 00:09:16.094 { 00:09:16.094 "params": { 00:09:16.094 "trtype": "pcie", 00:09:16.094 "traddr": "0000:00:07.0", 00:09:16.094 "name": "Nvme1" 00:09:16.094 }, 00:09:16.094 "method": "bdev_nvme_attach_controller" 00:09:16.094 }, 00:09:16.094 { 00:09:16.094 "method": "bdev_wait_for_examine" 00:09:16.094 } 00:09:16.094 ] 00:09:16.094 } 00:09:16.094 ] 00:09:16.094 } 00:09:16.094 [2024-11-28 07:18:38.364945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.353 [2024-11-28 07:18:38.416966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.353  [2024-11-28T07:18:38.887Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:09:16.612 00:09:16.612 07:18:38 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:16.612 07:18:38 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:16.612 07:18:38 -- dd/common.sh@11 -- # local nvme_ref= 00:09:16.612 07:18:38 -- dd/common.sh@12 -- # local size=4194330 00:09:16.612 07:18:38 -- dd/common.sh@14 -- # local bs=1048576 00:09:16.612 07:18:38 -- dd/common.sh@15 -- # local count=5 00:09:16.612 07:18:38 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:16.612 07:18:38 -- dd/common.sh@18 -- # gen_conf 00:09:16.612 07:18:38 -- dd/common.sh@31 -- # xtrace_disable 00:09:16.612 07:18:38 -- common/autotest_common.sh@10 -- # set +x 00:09:16.871 [2024-11-28 07:18:38.889265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
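Cleanup is the clear_nvme pair being traced here: each bdev gets the stretch the test wrote (4194330 bytes, rounded up to five 1 MiB blocks) zero-filled again. In shorthand, under the same gen_conf assumption:

  for bdev in Nvme0n1 Nvme1n1; do
      spdk_dd --if=/dev/zero --bs=1048576 --ob="$bdev" --count=5 --json <(gen_conf)
  done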
00:09:16.871 [2024-11-28 07:18:38.889376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71277 ] 00:09:16.871 { 00:09:16.871 "subsystems": [ 00:09:16.871 { 00:09:16.871 "subsystem": "bdev", 00:09:16.871 "config": [ 00:09:16.871 { 00:09:16.871 "params": { 00:09:16.871 "trtype": "pcie", 00:09:16.871 "traddr": "0000:00:06.0", 00:09:16.871 "name": "Nvme0" 00:09:16.871 }, 00:09:16.871 "method": "bdev_nvme_attach_controller" 00:09:16.871 }, 00:09:16.871 { 00:09:16.871 "params": { 00:09:16.871 "trtype": "pcie", 00:09:16.871 "traddr": "0000:00:07.0", 00:09:16.871 "name": "Nvme1" 00:09:16.871 }, 00:09:16.871 "method": "bdev_nvme_attach_controller" 00:09:16.871 }, 00:09:16.871 { 00:09:16.871 "method": "bdev_wait_for_examine" 00:09:16.871 } 00:09:16.871 ] 00:09:16.871 } 00:09:16.871 ] 00:09:16.871 } 00:09:16.871 [2024-11-28 07:18:39.029064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.871 [2024-11-28 07:18:39.107956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.130  [2024-11-28T07:18:39.664Z] Copying: 5120/5120 [kB] (average 714 MBps) 00:09:17.389 00:09:17.389 07:18:39 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:17.389 00:09:17.389 real 0m7.193s 00:09:17.389 user 0m5.076s 00:09:17.389 sys 0m1.588s 00:09:17.389 07:18:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:17.389 ************************************ 00:09:17.389 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:09:17.389 END TEST spdk_dd_bdev_to_bdev 00:09:17.389 ************************************ 00:09:17.389 07:18:39 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:17.389 07:18:39 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:17.389 07:18:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:17.389 07:18:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.389 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:09:17.389 ************************************ 00:09:17.389 START TEST spdk_dd_uring 00:09:17.389 ************************************ 00:09:17.389 07:18:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:17.648 * Looking for test storage... 
00:09:17.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:17.648 07:18:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:17.648 07:18:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:17.648 07:18:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:17.648 07:18:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:17.648 07:18:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:17.648 07:18:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:17.648 07:18:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:17.648 07:18:39 -- scripts/common.sh@335 -- # IFS=.-: 00:09:17.648 07:18:39 -- scripts/common.sh@335 -- # read -ra ver1 00:09:17.648 07:18:39 -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.648 07:18:39 -- scripts/common.sh@336 -- # read -ra ver2 00:09:17.648 07:18:39 -- scripts/common.sh@337 -- # local 'op=<' 00:09:17.648 07:18:39 -- scripts/common.sh@339 -- # ver1_l=2 00:09:17.648 07:18:39 -- scripts/common.sh@340 -- # ver2_l=1 00:09:17.648 07:18:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:17.648 07:18:39 -- scripts/common.sh@343 -- # case "$op" in 00:09:17.648 07:18:39 -- scripts/common.sh@344 -- # : 1 00:09:17.648 07:18:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:17.648 07:18:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.648 07:18:39 -- scripts/common.sh@364 -- # decimal 1 00:09:17.648 07:18:39 -- scripts/common.sh@352 -- # local d=1 00:09:17.648 07:18:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.648 07:18:39 -- scripts/common.sh@354 -- # echo 1 00:09:17.648 07:18:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:17.648 07:18:39 -- scripts/common.sh@365 -- # decimal 2 00:09:17.648 07:18:39 -- scripts/common.sh@352 -- # local d=2 00:09:17.648 07:18:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.648 07:18:39 -- scripts/common.sh@354 -- # echo 2 00:09:17.648 07:18:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:17.648 07:18:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:17.648 07:18:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:17.648 07:18:39 -- scripts/common.sh@367 -- # return 0 00:09:17.648 07:18:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:17.648 07:18:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.648 --rc genhtml_branch_coverage=1 00:09:17.648 --rc genhtml_function_coverage=1 00:09:17.648 --rc genhtml_legend=1 00:09:17.648 --rc geninfo_all_blocks=1 00:09:17.648 --rc geninfo_unexecuted_blocks=1 00:09:17.648 00:09:17.648 ' 00:09:17.648 07:18:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.648 --rc genhtml_branch_coverage=1 00:09:17.648 --rc genhtml_function_coverage=1 00:09:17.648 --rc genhtml_legend=1 00:09:17.648 --rc geninfo_all_blocks=1 00:09:17.648 --rc geninfo_unexecuted_blocks=1 00:09:17.648 00:09:17.648 ' 00:09:17.648 07:18:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.648 --rc genhtml_branch_coverage=1 00:09:17.648 --rc genhtml_function_coverage=1 00:09:17.648 --rc genhtml_legend=1 00:09:17.648 --rc geninfo_all_blocks=1 00:09:17.648 --rc geninfo_unexecuted_blocks=1 00:09:17.648 00:09:17.648 ' 00:09:17.648 07:18:39 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.648 --rc genhtml_branch_coverage=1 00:09:17.648 --rc genhtml_function_coverage=1 00:09:17.648 --rc genhtml_legend=1 00:09:17.648 --rc geninfo_all_blocks=1 00:09:17.648 --rc geninfo_unexecuted_blocks=1 00:09:17.648 00:09:17.648 ' 00:09:17.648 07:18:39 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:17.648 07:18:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.648 07:18:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.648 07:18:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.648 07:18:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.648 07:18:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.648 07:18:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.648 07:18:39 -- paths/export.sh@5 -- # export PATH 00:09:17.648 07:18:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.648 07:18:39 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:17.648 07:18:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:17.648 07:18:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:17.648 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:09:17.648 ************************************ 00:09:17.648 START TEST dd_uring_copy 00:09:17.648 ************************************ 00:09:17.648 07:18:39 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:09:17.648 07:18:39 -- dd/uring.sh@15 -- # local zram_dev_id 00:09:17.648 07:18:39 -- dd/uring.sh@16 -- # local magic 00:09:17.648 07:18:39 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:17.648 07:18:39 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:17.648 07:18:39 -- dd/uring.sh@19 -- # local verify_magic 00:09:17.648 07:18:39 -- dd/uring.sh@21 -- # init_zram 00:09:17.648 07:18:39 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:09:17.648 07:18:39 -- dd/common.sh@164 -- # return 00:09:17.648 07:18:39 -- dd/uring.sh@22 -- # create_zram_dev 00:09:17.648 07:18:39 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:09:17.648 07:18:39 -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:17.648 07:18:39 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:17.648 07:18:39 -- dd/common.sh@181 -- # local id=1 00:09:17.648 07:18:39 -- dd/common.sh@182 -- # local size=512M 00:09:17.648 07:18:39 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:09:17.649 07:18:39 -- dd/common.sh@186 -- # echo 512M 00:09:17.649 07:18:39 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:17.649 07:18:39 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:17.649 07:18:39 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:17.649 07:18:39 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:17.649 07:18:39 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:17.649 07:18:39 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:17.649 07:18:39 -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:17.649 07:18:39 -- dd/common.sh@98 -- # xtrace_disable 00:09:17.649 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:09:17.649 07:18:39 -- dd/uring.sh@41 -- # magic=o095qcxo0onmnmt31d83etrl1dgi5vie3xg40vmpxj9mrrgni8zobif8vhx3f2ttzai1mlgnwuw04dn6i0npyd1bk1mfsm0sbzva7m3tfklmdbmmlr5babs4f6qwa2mw581y9h72u6w0nygcbdb5ohkvlxp2w0jdkjkosiae8sk72mp4p5nqsj1zkm3xdv0ueoe9jrg5iiibc4lb66863gy8qct5itejk08gx3xnumh8oz17v2ztjnbnbpbeq7r3xbtcdt8ov017yqzhfofzzeqdq0ambnd9giopatk74q5dswnsy6t4e71q4wijv54tx7zuprfs43zflx9gztofi9of7vxuvt02go5oot9gfw9ry3doy7thtb61ge88kh8xcsu8znwoxsjhy4efyr9ps1keaa8da62pd3ui0g1qskmb9pw2m5nketr2v6fqa8rfsgfmwk0h6eqdd14i9tbf3gzdp6wwp6ynevc69h74pz3mkg9zp5vcak764w580c053us7d7yh910ri6hktxhltcca5u3gr78dqp684tpjc1ke68jm6ir810b5omje16bjgm2kll3uk7zq9a8ph4m083xnwdfnk3ma69rdsrdpnhb46jynetydm1md1gm2xgo5dhw6jlz7sgbtz1jmm4q9wiglkclibjwfl2i9ipd959eq5gl7jjg2nv4l6zgj86a6w5z3k16smp14sm5ep27dyjwjtoq7krzbjouk515vcm4ce1eluqj78of0dbrsvzwaig8c52zwo4gtie4368o8ejrz9966ta6k8vee569sfzu4b0qoaheo12sb770n34no5xfevm7377rcl67j6sl3op48m9tzslf3ndyjwsenmuel9lc846pbd86e9un5ppmnolrqaddw3hdsran2kx4mm0p0t55v1ksh3rh909x31drhq6ldlmcd5fbbsqo1swu9d72l4um61ppllecy5o0jdlozwbji60bf50kdayrrm4kki60wa5nm091avk3qcyf4 00:09:17.649 07:18:39 -- dd/uring.sh@42 -- # echo 
o095qcxo0onmnmt31d83etrl1dgi5vie3xg40vmpxj9mrrgni8zobif8vhx3f2ttzai1mlgnwuw04dn6i0npyd1bk1mfsm0sbzva7m3tfklmdbmmlr5babs4f6qwa2mw581y9h72u6w0nygcbdb5ohkvlxp2w0jdkjkosiae8sk72mp4p5nqsj1zkm3xdv0ueoe9jrg5iiibc4lb66863gy8qct5itejk08gx3xnumh8oz17v2ztjnbnbpbeq7r3xbtcdt8ov017yqzhfofzzeqdq0ambnd9giopatk74q5dswnsy6t4e71q4wijv54tx7zuprfs43zflx9gztofi9of7vxuvt02go5oot9gfw9ry3doy7thtb61ge88kh8xcsu8znwoxsjhy4efyr9ps1keaa8da62pd3ui0g1qskmb9pw2m5nketr2v6fqa8rfsgfmwk0h6eqdd14i9tbf3gzdp6wwp6ynevc69h74pz3mkg9zp5vcak764w580c053us7d7yh910ri6hktxhltcca5u3gr78dqp684tpjc1ke68jm6ir810b5omje16bjgm2kll3uk7zq9a8ph4m083xnwdfnk3ma69rdsrdpnhb46jynetydm1md1gm2xgo5dhw6jlz7sgbtz1jmm4q9wiglkclibjwfl2i9ipd959eq5gl7jjg2nv4l6zgj86a6w5z3k16smp14sm5ep27dyjwjtoq7krzbjouk515vcm4ce1eluqj78of0dbrsvzwaig8c52zwo4gtie4368o8ejrz9966ta6k8vee569sfzu4b0qoaheo12sb770n34no5xfevm7377rcl67j6sl3op48m9tzslf3ndyjwsenmuel9lc846pbd86e9un5ppmnolrqaddw3hdsran2kx4mm0p0t55v1ksh3rh909x31drhq6ldlmcd5fbbsqo1swu9d72l4um61ppllecy5o0jdlozwbji60bf50kdayrrm4kki60wa5nm091avk3qcyf4 00:09:17.649 07:18:39 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:17.649 [2024-11-28 07:18:39.904080] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:17.649 [2024-11-28 07:18:39.904174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71353 ] 00:09:17.907 [2024-11-28 07:18:40.042354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.907 [2024-11-28 07:18:40.117366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.472  [2024-11-28T07:18:41.314Z] Copying: 511/511 [MB] (average 1426 MBps) 00:09:19.039 00:09:19.039 07:18:41 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:19.039 07:18:41 -- dd/uring.sh@54 -- # gen_conf 00:09:19.039 07:18:41 -- dd/common.sh@31 -- # xtrace_disable 00:09:19.039 07:18:41 -- common/autotest_common.sh@10 -- # set +x 00:09:19.039 [2024-11-28 07:18:41.163093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:19.039 [2024-11-28 07:18:41.163199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71367 ] 00:09:19.039 { 00:09:19.039 "subsystems": [ 00:09:19.039 { 00:09:19.039 "subsystem": "bdev", 00:09:19.039 "config": [ 00:09:19.039 { 00:09:19.039 "params": { 00:09:19.039 "block_size": 512, 00:09:19.039 "num_blocks": 1048576, 00:09:19.039 "name": "malloc0" 00:09:19.039 }, 00:09:19.039 "method": "bdev_malloc_create" 00:09:19.039 }, 00:09:19.039 { 00:09:19.039 "params": { 00:09:19.039 "filename": "/dev/zram1", 00:09:19.039 "name": "uring0" 00:09:19.039 }, 00:09:19.039 "method": "bdev_uring_create" 00:09:19.039 }, 00:09:19.039 { 00:09:19.039 "method": "bdev_wait_for_examine" 00:09:19.039 } 00:09:19.039 ] 00:09:19.039 } 00:09:19.039 ] 00:09:19.039 } 00:09:19.039 [2024-11-28 07:18:41.294590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.310 [2024-11-28 07:18:41.366021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.718  [2024-11-28T07:18:43.929Z] Copying: 210/512 [MB] (210 MBps) [2024-11-28T07:18:44.188Z] Copying: 409/512 [MB] (198 MBps) [2024-11-28T07:18:44.755Z] Copying: 512/512 [MB] (average 204 MBps) 00:09:22.480 00:09:22.480 07:18:44 -- dd/uring.sh@60 -- # gen_conf 00:09:22.480 07:18:44 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:22.480 07:18:44 -- dd/common.sh@31 -- # xtrace_disable 00:09:22.481 07:18:44 -- common/autotest_common.sh@10 -- # set +x 00:09:22.481 [2024-11-28 07:18:44.581358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
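For the uring_zram_copy flow being traced: a zram device is hot-added and sized to 512M, an io_uring bdev (uring0) is layered on /dev/zram1 next to a malloc bdev, and a 512 MiB file seeded with a 1024-character magic string is pushed through uring0 and read back out. A sketch of the round trip; the sysfs disksize target and the redirect of the magic into magic.dump0 are assumed, the spdk_dd lines mirror the trace:

  dev_id=$(cat /sys/class/zram-control/hot_add)       # returned 1 above, i.e. /dev/zram1
  echo 512M > "/sys/block/zram${dev_id}/disksize"     # assumed target of the 'echo 512M'
  echo "$magic" > magic.dump0                          # 1024-char magic + newline (redirect assumed)
  spdk_dd --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1
                                                       # 536869887 + 1025 pads the file to exactly 512 MiB
  spdk_dd --if=magic.dump0 --ob=uring0 --json <(gen_conf)    # file -> uring bdev on /dev/zram1
  spdk_dd --ib=uring0 --of=magic.dump1 --json <(gen_conf)    # uring bdev -> second dump file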
00:09:22.481 [2024-11-28 07:18:44.581474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71426 ] 00:09:22.481 { 00:09:22.481 "subsystems": [ 00:09:22.481 { 00:09:22.481 "subsystem": "bdev", 00:09:22.481 "config": [ 00:09:22.481 { 00:09:22.481 "params": { 00:09:22.481 "block_size": 512, 00:09:22.481 "num_blocks": 1048576, 00:09:22.481 "name": "malloc0" 00:09:22.481 }, 00:09:22.481 "method": "bdev_malloc_create" 00:09:22.481 }, 00:09:22.481 { 00:09:22.481 "params": { 00:09:22.481 "filename": "/dev/zram1", 00:09:22.481 "name": "uring0" 00:09:22.481 }, 00:09:22.481 "method": "bdev_uring_create" 00:09:22.481 }, 00:09:22.481 { 00:09:22.481 "method": "bdev_wait_for_examine" 00:09:22.481 } 00:09:22.481 ] 00:09:22.481 } 00:09:22.481 ] 00:09:22.481 } 00:09:22.481 [2024-11-28 07:18:44.713465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.739 [2024-11-28 07:18:44.801867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.130  [2024-11-28T07:18:47.342Z] Copying: 142/512 [MB] (142 MBps) [2024-11-28T07:18:48.278Z] Copying: 285/512 [MB] (142 MBps) [2024-11-28T07:18:48.846Z] Copying: 417/512 [MB] (131 MBps) [2024-11-28T07:18:49.105Z] Copying: 512/512 [MB] (average 139 MBps) 00:09:26.830 00:09:26.830 07:18:49 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:26.830 07:18:49 -- dd/uring.sh@66 -- # [[ o095qcxo0onmnmt31d83etrl1dgi5vie3xg40vmpxj9mrrgni8zobif8vhx3f2ttzai1mlgnwuw04dn6i0npyd1bk1mfsm0sbzva7m3tfklmdbmmlr5babs4f6qwa2mw581y9h72u6w0nygcbdb5ohkvlxp2w0jdkjkosiae8sk72mp4p5nqsj1zkm3xdv0ueoe9jrg5iiibc4lb66863gy8qct5itejk08gx3xnumh8oz17v2ztjnbnbpbeq7r3xbtcdt8ov017yqzhfofzzeqdq0ambnd9giopatk74q5dswnsy6t4e71q4wijv54tx7zuprfs43zflx9gztofi9of7vxuvt02go5oot9gfw9ry3doy7thtb61ge88kh8xcsu8znwoxsjhy4efyr9ps1keaa8da62pd3ui0g1qskmb9pw2m5nketr2v6fqa8rfsgfmwk0h6eqdd14i9tbf3gzdp6wwp6ynevc69h74pz3mkg9zp5vcak764w580c053us7d7yh910ri6hktxhltcca5u3gr78dqp684tpjc1ke68jm6ir810b5omje16bjgm2kll3uk7zq9a8ph4m083xnwdfnk3ma69rdsrdpnhb46jynetydm1md1gm2xgo5dhw6jlz7sgbtz1jmm4q9wiglkclibjwfl2i9ipd959eq5gl7jjg2nv4l6zgj86a6w5z3k16smp14sm5ep27dyjwjtoq7krzbjouk515vcm4ce1eluqj78of0dbrsvzwaig8c52zwo4gtie4368o8ejrz9966ta6k8vee569sfzu4b0qoaheo12sb770n34no5xfevm7377rcl67j6sl3op48m9tzslf3ndyjwsenmuel9lc846pbd86e9un5ppmnolrqaddw3hdsran2kx4mm0p0t55v1ksh3rh909x31drhq6ldlmcd5fbbsqo1swu9d72l4um61ppllecy5o0jdlozwbji60bf50kdayrrm4kki60wa5nm091avk3qcyf4 == 
\o\0\9\5\q\c\x\o\0\o\n\m\n\m\t\3\1\d\8\3\e\t\r\l\1\d\g\i\5\v\i\e\3\x\g\4\0\v\m\p\x\j\9\m\r\r\g\n\i\8\z\o\b\i\f\8\v\h\x\3\f\2\t\t\z\a\i\1\m\l\g\n\w\u\w\0\4\d\n\6\i\0\n\p\y\d\1\b\k\1\m\f\s\m\0\s\b\z\v\a\7\m\3\t\f\k\l\m\d\b\m\m\l\r\5\b\a\b\s\4\f\6\q\w\a\2\m\w\5\8\1\y\9\h\7\2\u\6\w\0\n\y\g\c\b\d\b\5\o\h\k\v\l\x\p\2\w\0\j\d\k\j\k\o\s\i\a\e\8\s\k\7\2\m\p\4\p\5\n\q\s\j\1\z\k\m\3\x\d\v\0\u\e\o\e\9\j\r\g\5\i\i\i\b\c\4\l\b\6\6\8\6\3\g\y\8\q\c\t\5\i\t\e\j\k\0\8\g\x\3\x\n\u\m\h\8\o\z\1\7\v\2\z\t\j\n\b\n\b\p\b\e\q\7\r\3\x\b\t\c\d\t\8\o\v\0\1\7\y\q\z\h\f\o\f\z\z\e\q\d\q\0\a\m\b\n\d\9\g\i\o\p\a\t\k\7\4\q\5\d\s\w\n\s\y\6\t\4\e\7\1\q\4\w\i\j\v\5\4\t\x\7\z\u\p\r\f\s\4\3\z\f\l\x\9\g\z\t\o\f\i\9\o\f\7\v\x\u\v\t\0\2\g\o\5\o\o\t\9\g\f\w\9\r\y\3\d\o\y\7\t\h\t\b\6\1\g\e\8\8\k\h\8\x\c\s\u\8\z\n\w\o\x\s\j\h\y\4\e\f\y\r\9\p\s\1\k\e\a\a\8\d\a\6\2\p\d\3\u\i\0\g\1\q\s\k\m\b\9\p\w\2\m\5\n\k\e\t\r\2\v\6\f\q\a\8\r\f\s\g\f\m\w\k\0\h\6\e\q\d\d\1\4\i\9\t\b\f\3\g\z\d\p\6\w\w\p\6\y\n\e\v\c\6\9\h\7\4\p\z\3\m\k\g\9\z\p\5\v\c\a\k\7\6\4\w\5\8\0\c\0\5\3\u\s\7\d\7\y\h\9\1\0\r\i\6\h\k\t\x\h\l\t\c\c\a\5\u\3\g\r\7\8\d\q\p\6\8\4\t\p\j\c\1\k\e\6\8\j\m\6\i\r\8\1\0\b\5\o\m\j\e\1\6\b\j\g\m\2\k\l\l\3\u\k\7\z\q\9\a\8\p\h\4\m\0\8\3\x\n\w\d\f\n\k\3\m\a\6\9\r\d\s\r\d\p\n\h\b\4\6\j\y\n\e\t\y\d\m\1\m\d\1\g\m\2\x\g\o\5\d\h\w\6\j\l\z\7\s\g\b\t\z\1\j\m\m\4\q\9\w\i\g\l\k\c\l\i\b\j\w\f\l\2\i\9\i\p\d\9\5\9\e\q\5\g\l\7\j\j\g\2\n\v\4\l\6\z\g\j\8\6\a\6\w\5\z\3\k\1\6\s\m\p\1\4\s\m\5\e\p\2\7\d\y\j\w\j\t\o\q\7\k\r\z\b\j\o\u\k\5\1\5\v\c\m\4\c\e\1\e\l\u\q\j\7\8\o\f\0\d\b\r\s\v\z\w\a\i\g\8\c\5\2\z\w\o\4\g\t\i\e\4\3\6\8\o\8\e\j\r\z\9\9\6\6\t\a\6\k\8\v\e\e\5\6\9\s\f\z\u\4\b\0\q\o\a\h\e\o\1\2\s\b\7\7\0\n\3\4\n\o\5\x\f\e\v\m\7\3\7\7\r\c\l\6\7\j\6\s\l\3\o\p\4\8\m\9\t\z\s\l\f\3\n\d\y\j\w\s\e\n\m\u\e\l\9\l\c\8\4\6\p\b\d\8\6\e\9\u\n\5\p\p\m\n\o\l\r\q\a\d\d\w\3\h\d\s\r\a\n\2\k\x\4\m\m\0\p\0\t\5\5\v\1\k\s\h\3\r\h\9\0\9\x\3\1\d\r\h\q\6\l\d\l\m\c\d\5\f\b\b\s\q\o\1\s\w\u\9\d\7\2\l\4\u\m\6\1\p\p\l\l\e\c\y\5\o\0\j\d\l\o\z\w\b\j\i\6\0\b\f\5\0\k\d\a\y\r\r\m\4\k\k\i\6\0\w\a\5\n\m\0\9\1\a\v\k\3\q\c\y\f\4 ]] 00:09:26.830 07:18:49 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:26.830 07:18:49 -- dd/uring.sh@69 -- # [[ o095qcxo0onmnmt31d83etrl1dgi5vie3xg40vmpxj9mrrgni8zobif8vhx3f2ttzai1mlgnwuw04dn6i0npyd1bk1mfsm0sbzva7m3tfklmdbmmlr5babs4f6qwa2mw581y9h72u6w0nygcbdb5ohkvlxp2w0jdkjkosiae8sk72mp4p5nqsj1zkm3xdv0ueoe9jrg5iiibc4lb66863gy8qct5itejk08gx3xnumh8oz17v2ztjnbnbpbeq7r3xbtcdt8ov017yqzhfofzzeqdq0ambnd9giopatk74q5dswnsy6t4e71q4wijv54tx7zuprfs43zflx9gztofi9of7vxuvt02go5oot9gfw9ry3doy7thtb61ge88kh8xcsu8znwoxsjhy4efyr9ps1keaa8da62pd3ui0g1qskmb9pw2m5nketr2v6fqa8rfsgfmwk0h6eqdd14i9tbf3gzdp6wwp6ynevc69h74pz3mkg9zp5vcak764w580c053us7d7yh910ri6hktxhltcca5u3gr78dqp684tpjc1ke68jm6ir810b5omje16bjgm2kll3uk7zq9a8ph4m083xnwdfnk3ma69rdsrdpnhb46jynetydm1md1gm2xgo5dhw6jlz7sgbtz1jmm4q9wiglkclibjwfl2i9ipd959eq5gl7jjg2nv4l6zgj86a6w5z3k16smp14sm5ep27dyjwjtoq7krzbjouk515vcm4ce1eluqj78of0dbrsvzwaig8c52zwo4gtie4368o8ejrz9966ta6k8vee569sfzu4b0qoaheo12sb770n34no5xfevm7377rcl67j6sl3op48m9tzslf3ndyjwsenmuel9lc846pbd86e9un5ppmnolrqaddw3hdsran2kx4mm0p0t55v1ksh3rh909x31drhq6ldlmcd5fbbsqo1swu9d72l4um61ppllecy5o0jdlozwbji60bf50kdayrrm4kki60wa5nm091avk3qcyf4 == 
\o\0\9\5\q\c\x\o\0\o\n\m\n\m\t\3\1\d\8\3\e\t\r\l\1\d\g\i\5\v\i\e\3\x\g\4\0\v\m\p\x\j\9\m\r\r\g\n\i\8\z\o\b\i\f\8\v\h\x\3\f\2\t\t\z\a\i\1\m\l\g\n\w\u\w\0\4\d\n\6\i\0\n\p\y\d\1\b\k\1\m\f\s\m\0\s\b\z\v\a\7\m\3\t\f\k\l\m\d\b\m\m\l\r\5\b\a\b\s\4\f\6\q\w\a\2\m\w\5\8\1\y\9\h\7\2\u\6\w\0\n\y\g\c\b\d\b\5\o\h\k\v\l\x\p\2\w\0\j\d\k\j\k\o\s\i\a\e\8\s\k\7\2\m\p\4\p\5\n\q\s\j\1\z\k\m\3\x\d\v\0\u\e\o\e\9\j\r\g\5\i\i\i\b\c\4\l\b\6\6\8\6\3\g\y\8\q\c\t\5\i\t\e\j\k\0\8\g\x\3\x\n\u\m\h\8\o\z\1\7\v\2\z\t\j\n\b\n\b\p\b\e\q\7\r\3\x\b\t\c\d\t\8\o\v\0\1\7\y\q\z\h\f\o\f\z\z\e\q\d\q\0\a\m\b\n\d\9\g\i\o\p\a\t\k\7\4\q\5\d\s\w\n\s\y\6\t\4\e\7\1\q\4\w\i\j\v\5\4\t\x\7\z\u\p\r\f\s\4\3\z\f\l\x\9\g\z\t\o\f\i\9\o\f\7\v\x\u\v\t\0\2\g\o\5\o\o\t\9\g\f\w\9\r\y\3\d\o\y\7\t\h\t\b\6\1\g\e\8\8\k\h\8\x\c\s\u\8\z\n\w\o\x\s\j\h\y\4\e\f\y\r\9\p\s\1\k\e\a\a\8\d\a\6\2\p\d\3\u\i\0\g\1\q\s\k\m\b\9\p\w\2\m\5\n\k\e\t\r\2\v\6\f\q\a\8\r\f\s\g\f\m\w\k\0\h\6\e\q\d\d\1\4\i\9\t\b\f\3\g\z\d\p\6\w\w\p\6\y\n\e\v\c\6\9\h\7\4\p\z\3\m\k\g\9\z\p\5\v\c\a\k\7\6\4\w\5\8\0\c\0\5\3\u\s\7\d\7\y\h\9\1\0\r\i\6\h\k\t\x\h\l\t\c\c\a\5\u\3\g\r\7\8\d\q\p\6\8\4\t\p\j\c\1\k\e\6\8\j\m\6\i\r\8\1\0\b\5\o\m\j\e\1\6\b\j\g\m\2\k\l\l\3\u\k\7\z\q\9\a\8\p\h\4\m\0\8\3\x\n\w\d\f\n\k\3\m\a\6\9\r\d\s\r\d\p\n\h\b\4\6\j\y\n\e\t\y\d\m\1\m\d\1\g\m\2\x\g\o\5\d\h\w\6\j\l\z\7\s\g\b\t\z\1\j\m\m\4\q\9\w\i\g\l\k\c\l\i\b\j\w\f\l\2\i\9\i\p\d\9\5\9\e\q\5\g\l\7\j\j\g\2\n\v\4\l\6\z\g\j\8\6\a\6\w\5\z\3\k\1\6\s\m\p\1\4\s\m\5\e\p\2\7\d\y\j\w\j\t\o\q\7\k\r\z\b\j\o\u\k\5\1\5\v\c\m\4\c\e\1\e\l\u\q\j\7\8\o\f\0\d\b\r\s\v\z\w\a\i\g\8\c\5\2\z\w\o\4\g\t\i\e\4\3\6\8\o\8\e\j\r\z\9\9\6\6\t\a\6\k\8\v\e\e\5\6\9\s\f\z\u\4\b\0\q\o\a\h\e\o\1\2\s\b\7\7\0\n\3\4\n\o\5\x\f\e\v\m\7\3\7\7\r\c\l\6\7\j\6\s\l\3\o\p\4\8\m\9\t\z\s\l\f\3\n\d\y\j\w\s\e\n\m\u\e\l\9\l\c\8\4\6\p\b\d\8\6\e\9\u\n\5\p\p\m\n\o\l\r\q\a\d\d\w\3\h\d\s\r\a\n\2\k\x\4\m\m\0\p\0\t\5\5\v\1\k\s\h\3\r\h\9\0\9\x\3\1\d\r\h\q\6\l\d\l\m\c\d\5\f\b\b\s\q\o\1\s\w\u\9\d\7\2\l\4\u\m\6\1\p\p\l\l\e\c\y\5\o\0\j\d\l\o\z\w\b\j\i\6\0\b\f\5\0\k\d\a\y\r\r\m\4\k\k\i\6\0\w\a\5\n\m\0\9\1\a\v\k\3\q\c\y\f\4 ]] 00:09:26.830 07:18:49 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:27.398 07:18:49 -- dd/uring.sh@75 -- # gen_conf 00:09:27.398 07:18:49 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:27.398 07:18:49 -- dd/common.sh@31 -- # xtrace_disable 00:09:27.398 07:18:49 -- common/autotest_common.sh@10 -- # set +x 00:09:27.398 [2024-11-28 07:18:49.538794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
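The verification traced here is three-fold: the first kilobyte of each dump is read back and compared against the generated magic (the two escaped [[ ... ]] checks above, presumably one per dump file), the two 512 MiB dumps are diffed byte-for-byte, and the data is finally pulled from uring0 into the malloc bdev as one more read path. In outline, with the read redirections assumed:

  read -rn1024 verify_magic < magic.dump0 && [[ $verify_magic == "$magic" ]]
  read -rn1024 verify_magic < magic.dump1 && [[ $verify_magic == "$magic" ]]
  diff -q magic.dump0 magic.dump1
  spdk_dd --ib=uring0 --ob=malloc0 --json <(gen_conf)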
00:09:27.398 [2024-11-28 07:18:49.538935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71494 ] 00:09:27.398 { 00:09:27.398 "subsystems": [ 00:09:27.398 { 00:09:27.398 "subsystem": "bdev", 00:09:27.398 "config": [ 00:09:27.398 { 00:09:27.398 "params": { 00:09:27.398 "block_size": 512, 00:09:27.398 "num_blocks": 1048576, 00:09:27.398 "name": "malloc0" 00:09:27.398 }, 00:09:27.398 "method": "bdev_malloc_create" 00:09:27.398 }, 00:09:27.398 { 00:09:27.398 "params": { 00:09:27.398 "filename": "/dev/zram1", 00:09:27.398 "name": "uring0" 00:09:27.398 }, 00:09:27.398 "method": "bdev_uring_create" 00:09:27.398 }, 00:09:27.398 { 00:09:27.398 "method": "bdev_wait_for_examine" 00:09:27.398 } 00:09:27.398 ] 00:09:27.398 } 00:09:27.398 ] 00:09:27.398 } 00:09:27.657 [2024-11-28 07:18:49.685076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.657 [2024-11-28 07:18:49.776542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.034  [2024-11-28T07:18:52.245Z] Copying: 155/512 [MB] (155 MBps) [2024-11-28T07:18:53.181Z] Copying: 310/512 [MB] (155 MBps) [2024-11-28T07:18:53.441Z] Copying: 465/512 [MB] (155 MBps) [2024-11-28T07:18:53.707Z] Copying: 512/512 [MB] (average 154 MBps) 00:09:31.432 00:09:31.432 07:18:53 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:31.432 07:18:53 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:31.432 07:18:53 -- dd/uring.sh@87 -- # : 00:09:31.432 07:18:53 -- dd/uring.sh@87 -- # gen_conf 00:09:31.432 07:18:53 -- dd/uring.sh@87 -- # : 00:09:31.432 07:18:53 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:31.432 07:18:53 -- dd/common.sh@31 -- # xtrace_disable 00:09:31.432 07:18:53 -- common/autotest_common.sh@10 -- # set +x 00:09:31.701 [2024-11-28 07:18:53.735885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:31.701 [2024-11-28 07:18:53.736438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71550 ] 00:09:31.701 { 00:09:31.701 "subsystems": [ 00:09:31.701 { 00:09:31.701 "subsystem": "bdev", 00:09:31.701 "config": [ 00:09:31.701 { 00:09:31.701 "params": { 00:09:31.701 "block_size": 512, 00:09:31.701 "num_blocks": 1048576, 00:09:31.701 "name": "malloc0" 00:09:31.701 }, 00:09:31.701 "method": "bdev_malloc_create" 00:09:31.701 }, 00:09:31.701 { 00:09:31.701 "params": { 00:09:31.701 "filename": "/dev/zram1", 00:09:31.701 "name": "uring0" 00:09:31.701 }, 00:09:31.701 "method": "bdev_uring_create" 00:09:31.701 }, 00:09:31.701 { 00:09:31.701 "params": { 00:09:31.701 "name": "uring0" 00:09:31.701 }, 00:09:31.701 "method": "bdev_uring_delete" 00:09:31.701 }, 00:09:31.701 { 00:09:31.701 "method": "bdev_wait_for_examine" 00:09:31.701 } 00:09:31.701 ] 00:09:31.701 } 00:09:31.701 ] 00:09:31.701 } 00:09:31.701 [2024-11-28 07:18:53.869407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.701 [2024-11-28 07:18:53.959935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.961  [2024-11-28T07:18:54.803Z] Copying: 0/0 [B] (average 0 Bps) 00:09:32.528 00:09:32.528 07:18:54 -- dd/uring.sh@94 -- # : 00:09:32.528 07:18:54 -- dd/uring.sh@94 -- # gen_conf 00:09:32.528 07:18:54 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:32.528 07:18:54 -- dd/common.sh@31 -- # xtrace_disable 00:09:32.528 07:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:32.528 07:18:54 -- common/autotest_common.sh@650 -- # local es=0 00:09:32.528 07:18:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:32.528 07:18:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.528 07:18:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:32.529 07:18:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.529 07:18:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:32.529 07:18:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.529 07:18:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:32.529 07:18:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:32.529 07:18:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:32.529 07:18:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:32.529 [2024-11-28 07:18:54.668262] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
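What follows is the negative half of the test: the config handed to this last spdk_dd run includes a bdev_uring_delete entry for uring0, so a copy that names uring0 as its input bdev has to fail, which is exactly the "Could not open bdev uring0: No such device" error in the trace below. The NOT helper from autotest_common.sh inverts the exit status; a minimal standalone equivalent (with /dev/null standing in for the script's /dev/fd scratch descriptors) would be:

  # expect failure: uring0 was removed by the bdev_uring_delete entry in the config
  if spdk_dd --ib=uring0 --of=/dev/null --json <(gen_conf); then
      echo "copy from a deleted bdev unexpectedly succeeded" >&2
      exit 1
  fi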
00:09:32.529 [2024-11-28 07:18:54.668410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71578 ] 00:09:32.529 { 00:09:32.529 "subsystems": [ 00:09:32.529 { 00:09:32.529 "subsystem": "bdev", 00:09:32.529 "config": [ 00:09:32.529 { 00:09:32.529 "params": { 00:09:32.529 "block_size": 512, 00:09:32.529 "num_blocks": 1048576, 00:09:32.529 "name": "malloc0" 00:09:32.529 }, 00:09:32.529 "method": "bdev_malloc_create" 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "params": { 00:09:32.529 "filename": "/dev/zram1", 00:09:32.529 "name": "uring0" 00:09:32.529 }, 00:09:32.529 "method": "bdev_uring_create" 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "params": { 00:09:32.529 "name": "uring0" 00:09:32.529 }, 00:09:32.529 "method": "bdev_uring_delete" 00:09:32.529 }, 00:09:32.529 { 00:09:32.529 "method": "bdev_wait_for_examine" 00:09:32.529 } 00:09:32.529 ] 00:09:32.529 } 00:09:32.529 ] 00:09:32.529 } 00:09:32.787 [2024-11-28 07:18:54.808225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.787 [2024-11-28 07:18:54.900715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.045 [2024-11-28 07:18:55.154491] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:33.045 [2024-11-28 07:18:55.154550] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:33.045 [2024-11-28 07:18:55.154564] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:09:33.045 [2024-11-28 07:18:55.154577] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:33.303 [2024-11-28 07:18:55.456583] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:33.303 07:18:55 -- common/autotest_common.sh@653 -- # es=237 00:09:33.303 07:18:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:33.303 07:18:55 -- common/autotest_common.sh@662 -- # es=109 00:09:33.303 07:18:55 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:33.303 07:18:55 -- common/autotest_common.sh@670 -- # es=1 00:09:33.303 07:18:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:33.303 07:18:55 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:33.303 07:18:55 -- dd/common.sh@172 -- # local id=1 00:09:33.303 07:18:55 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:09:33.303 07:18:55 -- dd/common.sh@176 -- # echo 1 00:09:33.303 07:18:55 -- dd/common.sh@177 -- # echo 1 00:09:33.303 07:18:55 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:33.562 00:09:33.562 real 0m15.920s 00:09:33.562 user 0m9.282s 00:09:33.562 sys 0m6.063s 00:09:33.562 ************************************ 00:09:33.562 END TEST dd_uring_copy 00:09:33.562 ************************************ 00:09:33.562 07:18:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.562 07:18:55 -- common/autotest_common.sh@10 -- # set +x 00:09:33.562 ************************************ 00:09:33.562 END TEST spdk_dd_uring 00:09:33.562 ************************************ 00:09:33.562 00:09:33.562 real 0m16.159s 00:09:33.562 user 0m9.420s 00:09:33.562 sys 0m6.175s 00:09:33.562 07:18:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:33.562 07:18:55 -- common/autotest_common.sh@10 -- # set +x 00:09:33.562 07:18:55 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:33.562 07:18:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.562 07:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.562 07:18:55 -- common/autotest_common.sh@10 -- # set +x 00:09:33.562 ************************************ 00:09:33.562 START TEST spdk_dd_sparse 00:09:33.562 ************************************ 00:09:33.562 07:18:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:33.822 * Looking for test storage... 00:09:33.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:33.822 07:18:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:33.822 07:18:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:33.822 07:18:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:33.822 07:18:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:33.822 07:18:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:33.822 07:18:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:33.822 07:18:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:33.822 07:18:55 -- scripts/common.sh@335 -- # IFS=.-: 00:09:33.822 07:18:55 -- scripts/common.sh@335 -- # read -ra ver1 00:09:33.822 07:18:55 -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.822 07:18:55 -- scripts/common.sh@336 -- # read -ra ver2 00:09:33.822 07:18:55 -- scripts/common.sh@337 -- # local 'op=<' 00:09:33.822 07:18:55 -- scripts/common.sh@339 -- # ver1_l=2 00:09:33.822 07:18:55 -- scripts/common.sh@340 -- # ver2_l=1 00:09:33.822 07:18:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:33.822 07:18:55 -- scripts/common.sh@343 -- # case "$op" in 00:09:33.822 07:18:55 -- scripts/common.sh@344 -- # : 1 00:09:33.822 07:18:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:33.822 07:18:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:33.822 07:18:55 -- scripts/common.sh@364 -- # decimal 1 00:09:33.822 07:18:55 -- scripts/common.sh@352 -- # local d=1 00:09:33.822 07:18:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.822 07:18:55 -- scripts/common.sh@354 -- # echo 1 00:09:33.822 07:18:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:33.822 07:18:55 -- scripts/common.sh@365 -- # decimal 2 00:09:33.822 07:18:55 -- scripts/common.sh@352 -- # local d=2 00:09:33.822 07:18:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.822 07:18:55 -- scripts/common.sh@354 -- # echo 2 00:09:33.822 07:18:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:33.823 07:18:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:33.823 07:18:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:33.823 07:18:55 -- scripts/common.sh@367 -- # return 0 00:09:33.823 07:18:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.823 07:18:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.823 --rc genhtml_branch_coverage=1 00:09:33.823 --rc genhtml_function_coverage=1 00:09:33.823 --rc genhtml_legend=1 00:09:33.823 --rc geninfo_all_blocks=1 00:09:33.823 --rc geninfo_unexecuted_blocks=1 00:09:33.823 00:09:33.823 ' 00:09:33.823 07:18:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.823 --rc genhtml_branch_coverage=1 00:09:33.823 --rc genhtml_function_coverage=1 00:09:33.823 --rc genhtml_legend=1 00:09:33.823 --rc geninfo_all_blocks=1 00:09:33.823 --rc geninfo_unexecuted_blocks=1 00:09:33.823 00:09:33.823 ' 00:09:33.823 07:18:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.823 --rc genhtml_branch_coverage=1 00:09:33.823 --rc genhtml_function_coverage=1 00:09:33.823 --rc genhtml_legend=1 00:09:33.823 --rc geninfo_all_blocks=1 00:09:33.823 --rc geninfo_unexecuted_blocks=1 00:09:33.823 00:09:33.823 ' 00:09:33.823 07:18:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.823 --rc genhtml_branch_coverage=1 00:09:33.823 --rc genhtml_function_coverage=1 00:09:33.823 --rc genhtml_legend=1 00:09:33.823 --rc geninfo_all_blocks=1 00:09:33.823 --rc geninfo_unexecuted_blocks=1 00:09:33.823 00:09:33.823 ' 00:09:33.823 07:18:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.823 07:18:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.823 07:18:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.823 07:18:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.823 07:18:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.823 07:18:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.823 07:18:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.823 07:18:55 -- paths/export.sh@5 -- # export PATH 00:09:33.823 07:18:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.823 07:18:55 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:33.823 07:18:55 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:33.823 07:18:55 -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:33.823 07:18:55 -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:33.823 07:18:55 -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:33.823 07:18:55 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:33.823 07:18:55 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:33.823 07:18:55 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:33.823 07:18:55 -- dd/sparse.sh@118 -- # prepare 00:09:33.823 07:18:55 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:33.823 07:18:55 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:33.823 1+0 records in 00:09:33.823 1+0 records out 00:09:33.823 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00580818 s, 722 MB/s 00:09:33.823 07:18:56 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:33.823 1+0 records in 00:09:33.823 1+0 records out 00:09:33.823 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00616117 s, 681 MB/s 00:09:33.823 07:18:56 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:33.823 1+0 records in 00:09:33.823 1+0 records out 00:09:33.823 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00542693 s, 773 MB/s 00:09:33.823 07:18:56 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:33.823 07:18:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:33.823 07:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.823 07:18:56 -- common/autotest_common.sh@10 -- # set +x 00:09:33.823 ************************************ 00:09:33.823 START TEST dd_sparse_file_to_file 00:09:33.823 
************************************ 00:09:33.823 07:18:56 -- common/autotest_common.sh@1114 -- # file_to_file 00:09:33.823 07:18:56 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:33.823 07:18:56 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:33.823 07:18:56 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:33.823 07:18:56 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:33.823 07:18:56 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:33.823 07:18:56 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:33.823 07:18:56 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:33.823 07:18:56 -- dd/sparse.sh@41 -- # gen_conf 00:09:33.823 07:18:56 -- dd/common.sh@31 -- # xtrace_disable 00:09:33.823 07:18:56 -- common/autotest_common.sh@10 -- # set +x 00:09:33.823 [2024-11-28 07:18:56.079678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:33.823 [2024-11-28 07:18:56.079991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71671 ] 00:09:33.823 { 00:09:33.823 "subsystems": [ 00:09:33.823 { 00:09:33.823 "subsystem": "bdev", 00:09:33.823 "config": [ 00:09:33.823 { 00:09:33.823 "params": { 00:09:33.823 "block_size": 4096, 00:09:33.823 "filename": "dd_sparse_aio_disk", 00:09:33.823 "name": "dd_aio" 00:09:33.823 }, 00:09:33.823 "method": "bdev_aio_create" 00:09:33.823 }, 00:09:33.823 { 00:09:33.823 "params": { 00:09:33.823 "lvs_name": "dd_lvstore", 00:09:33.823 "bdev_name": "dd_aio" 00:09:33.823 }, 00:09:33.823 "method": "bdev_lvol_create_lvstore" 00:09:33.823 }, 00:09:33.823 { 00:09:33.823 "method": "bdev_wait_for_examine" 00:09:33.823 } 00:09:33.823 ] 00:09:33.823 } 00:09:33.823 ] 00:09:33.823 } 00:09:34.082 [2024-11-28 07:18:56.221600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.082 [2024-11-28 07:18:56.308133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.341  [2024-11-28T07:18:56.876Z] Copying: 12/36 [MB] (average 1333 MBps) 00:09:34.601 00:09:34.601 07:18:56 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:34.601 07:18:56 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:34.601 07:18:56 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:34.601 07:18:56 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:34.601 07:18:56 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:34.601 07:18:56 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:34.601 07:18:56 -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:34.601 07:18:56 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:34.601 ************************************ 00:09:34.601 END TEST dd_sparse_file_to_file 00:09:34.601 ************************************ 00:09:34.601 07:18:56 -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:34.601 07:18:56 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:34.601 00:09:34.601 real 0m0.694s 00:09:34.601 user 0m0.418s 00:09:34.601 sys 0m0.172s 00:09:34.601 07:18:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.601 07:18:56 -- common/autotest_common.sh@10 -- # set +x 00:09:34.601 07:18:56 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:09:34.601 07:18:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:34.601 07:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.601 07:18:56 -- common/autotest_common.sh@10 -- # set +x 00:09:34.601 ************************************ 00:09:34.601 START TEST dd_sparse_file_to_bdev 00:09:34.601 ************************************ 00:09:34.601 07:18:56 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:09:34.601 07:18:56 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:34.601 07:18:56 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:34.601 07:18:56 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:09:34.601 07:18:56 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:34.602 07:18:56 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:34.602 07:18:56 -- dd/sparse.sh@73 -- # gen_conf 00:09:34.602 07:18:56 -- dd/common.sh@31 -- # xtrace_disable 00:09:34.602 07:18:56 -- common/autotest_common.sh@10 -- # set +x 00:09:34.602 [2024-11-28 07:18:56.827029] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:34.602 [2024-11-28 07:18:56.827157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71717 ] 00:09:34.602 { 00:09:34.602 "subsystems": [ 00:09:34.602 { 00:09:34.602 "subsystem": "bdev", 00:09:34.602 "config": [ 00:09:34.602 { 00:09:34.602 "params": { 00:09:34.602 "block_size": 4096, 00:09:34.602 "filename": "dd_sparse_aio_disk", 00:09:34.602 "name": "dd_aio" 00:09:34.602 }, 00:09:34.602 "method": "bdev_aio_create" 00:09:34.602 }, 00:09:34.602 { 00:09:34.602 "params": { 00:09:34.602 "lvs_name": "dd_lvstore", 00:09:34.602 "lvol_name": "dd_lvol", 00:09:34.602 "size": 37748736, 00:09:34.602 "thin_provision": true 00:09:34.602 }, 00:09:34.602 "method": "bdev_lvol_create" 00:09:34.602 }, 00:09:34.602 { 00:09:34.602 "method": "bdev_wait_for_examine" 00:09:34.602 } 00:09:34.602 ] 00:09:34.602 } 00:09:34.602 ] 00:09:34.602 } 00:09:34.861 [2024-11-28 07:18:56.969103] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.861 [2024-11-28 07:18:57.060600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.120 [2024-11-28 07:18:57.156027] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:09:35.120  [2024-11-28T07:18:57.395Z] Copying: 12/36 [MB] (average 571 MBps)[2024-11-28 07:18:57.197126] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:09:35.379 00:09:35.379 00:09:35.379 00:09:35.379 real 0m0.678s 00:09:35.379 user 0m0.436s 00:09:35.379 sys 0m0.172s 00:09:35.379 ************************************ 00:09:35.379 END TEST dd_sparse_file_to_bdev 00:09:35.379 ************************************ 00:09:35.379 07:18:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.379 07:18:57 -- common/autotest_common.sh@10 -- # set +x 
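The prepare step earlier in this suite leaves file_zero1 with three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB, so its apparent size is 36 MiB (37748736 bytes) while only 12 MiB (24576 512-byte blocks) is actually allocated; both sparse tests above then assert that spdk_dd --sparse preserves exactly those two numbers on the destination. A minimal standalone sketch of that preparation and check, assuming GNU coreutils on Linux; verify_sparse_copy is an illustrative helper name, not a function from sparse.sh:

truncate dd_sparse_aio_disk --size 104857600            # 100 MiB backing file for the dd_aio bdev
dd if=/dev/zero of=file_zero1 bs=4M count=1              # data at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4       # data at 16 MiB, leaving a hole before it
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8       # data at 32 MiB; apparent size is now 36 MiB
verify_sparse_copy() {
    # apparent size (%s) and allocated 512-byte blocks (%b) must both match
    local src=$1 dst=$2
    [[ $(stat --printf=%s "$src") == $(stat --printf=%s "$dst") ]] &&
    [[ $(stat --printf=%b "$src") == $(stat --printf=%b "$dst") ]]
}
verify_sparse_copy file_zero1 file_zero2                 # both report 37748736 bytes / 24576 blocks above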
00:09:35.379 07:18:57 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:35.379 07:18:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:35.379 07:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.379 07:18:57 -- common/autotest_common.sh@10 -- # set +x 00:09:35.379 ************************************ 00:09:35.379 START TEST dd_sparse_bdev_to_file 00:09:35.379 ************************************ 00:09:35.379 07:18:57 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:09:35.379 07:18:57 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:35.379 07:18:57 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:35.379 07:18:57 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:35.379 07:18:57 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:35.379 07:18:57 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:35.379 07:18:57 -- dd/sparse.sh@91 -- # gen_conf 00:09:35.379 07:18:57 -- dd/common.sh@31 -- # xtrace_disable 00:09:35.379 07:18:57 -- common/autotest_common.sh@10 -- # set +x 00:09:35.379 [2024-11-28 07:18:57.548507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:35.379 [2024-11-28 07:18:57.548885] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71753 ] 00:09:35.379 { 00:09:35.379 "subsystems": [ 00:09:35.379 { 00:09:35.379 "subsystem": "bdev", 00:09:35.379 "config": [ 00:09:35.379 { 00:09:35.379 "params": { 00:09:35.379 "block_size": 4096, 00:09:35.379 "filename": "dd_sparse_aio_disk", 00:09:35.379 "name": "dd_aio" 00:09:35.379 }, 00:09:35.379 "method": "bdev_aio_create" 00:09:35.379 }, 00:09:35.379 { 00:09:35.379 "method": "bdev_wait_for_examine" 00:09:35.379 } 00:09:35.379 ] 00:09:35.379 } 00:09:35.379 ] 00:09:35.379 } 00:09:35.637 [2024-11-28 07:18:57.688047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.637 [2024-11-28 07:18:57.781032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.637  [2024-11-28T07:18:58.170Z] Copying: 12/36 [MB] (average 1200 MBps) 00:09:35.895 00:09:35.895 07:18:58 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:35.895 07:18:58 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:35.895 07:18:58 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:35.895 07:18:58 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:35.895 07:18:58 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:35.895 07:18:58 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:35.895 07:18:58 -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:35.895 07:18:58 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:35.895 ************************************ 00:09:35.895 END TEST dd_sparse_bdev_to_file 00:09:35.895 ************************************ 00:09:35.895 07:18:58 -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:35.895 07:18:58 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:35.895 00:09:35.895 real 0m0.661s 00:09:35.895 user 0m0.402s 00:09:35.895 sys 0m0.172s 00:09:35.895 07:18:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.895 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.153 07:18:58 -- 
dd/sparse.sh@1 -- # cleanup 00:09:36.153 07:18:58 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:36.153 07:18:58 -- dd/sparse.sh@12 -- # rm file_zero1 00:09:36.153 07:18:58 -- dd/sparse.sh@13 -- # rm file_zero2 00:09:36.153 07:18:58 -- dd/sparse.sh@14 -- # rm file_zero3 00:09:36.153 ************************************ 00:09:36.153 END TEST spdk_dd_sparse 00:09:36.153 ************************************ 00:09:36.153 00:09:36.153 real 0m2.397s 00:09:36.153 user 0m1.418s 00:09:36.153 sys 0m0.718s 00:09:36.153 07:18:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.153 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.153 07:18:58 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:36.153 07:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.153 07:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.153 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.153 ************************************ 00:09:36.153 START TEST spdk_dd_negative 00:09:36.153 ************************************ 00:09:36.153 07:18:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:36.153 * Looking for test storage... 00:09:36.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:36.153 07:18:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:36.153 07:18:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:36.153 07:18:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:36.153 07:18:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:36.153 07:18:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:36.153 07:18:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:36.153 07:18:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:36.153 07:18:58 -- scripts/common.sh@335 -- # IFS=.-: 00:09:36.153 07:18:58 -- scripts/common.sh@335 -- # read -ra ver1 00:09:36.153 07:18:58 -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.153 07:18:58 -- scripts/common.sh@336 -- # read -ra ver2 00:09:36.153 07:18:58 -- scripts/common.sh@337 -- # local 'op=<' 00:09:36.153 07:18:58 -- scripts/common.sh@339 -- # ver1_l=2 00:09:36.153 07:18:58 -- scripts/common.sh@340 -- # ver2_l=1 00:09:36.153 07:18:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:36.153 07:18:58 -- scripts/common.sh@343 -- # case "$op" in 00:09:36.153 07:18:58 -- scripts/common.sh@344 -- # : 1 00:09:36.153 07:18:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:36.153 07:18:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.153 07:18:58 -- scripts/common.sh@364 -- # decimal 1 00:09:36.153 07:18:58 -- scripts/common.sh@352 -- # local d=1 00:09:36.153 07:18:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.153 07:18:58 -- scripts/common.sh@354 -- # echo 1 00:09:36.153 07:18:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:36.153 07:18:58 -- scripts/common.sh@365 -- # decimal 2 00:09:36.153 07:18:58 -- scripts/common.sh@352 -- # local d=2 00:09:36.153 07:18:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.153 07:18:58 -- scripts/common.sh@354 -- # echo 2 00:09:36.153 07:18:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:36.153 07:18:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:36.153 07:18:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:36.153 07:18:58 -- scripts/common.sh@367 -- # return 0 00:09:36.153 07:18:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.153 07:18:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:36.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.153 --rc genhtml_branch_coverage=1 00:09:36.153 --rc genhtml_function_coverage=1 00:09:36.153 --rc genhtml_legend=1 00:09:36.153 --rc geninfo_all_blocks=1 00:09:36.153 --rc geninfo_unexecuted_blocks=1 00:09:36.153 00:09:36.153 ' 00:09:36.154 07:18:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.154 --rc genhtml_branch_coverage=1 00:09:36.154 --rc genhtml_function_coverage=1 00:09:36.154 --rc genhtml_legend=1 00:09:36.154 --rc geninfo_all_blocks=1 00:09:36.154 --rc geninfo_unexecuted_blocks=1 00:09:36.154 00:09:36.154 ' 00:09:36.154 07:18:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.154 --rc genhtml_branch_coverage=1 00:09:36.154 --rc genhtml_function_coverage=1 00:09:36.154 --rc genhtml_legend=1 00:09:36.154 --rc geninfo_all_blocks=1 00:09:36.154 --rc geninfo_unexecuted_blocks=1 00:09:36.154 00:09:36.154 ' 00:09:36.154 07:18:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.154 --rc genhtml_branch_coverage=1 00:09:36.154 --rc genhtml_function_coverage=1 00:09:36.154 --rc genhtml_legend=1 00:09:36.154 --rc geninfo_all_blocks=1 00:09:36.154 --rc geninfo_unexecuted_blocks=1 00:09:36.154 00:09:36.154 ' 00:09:36.154 07:18:58 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.412 07:18:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.412 07:18:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.412 07:18:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.412 07:18:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.412 07:18:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.412 07:18:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.412 07:18:58 -- paths/export.sh@5 -- # export PATH 00:09:36.412 07:18:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.412 07:18:58 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:36.412 07:18:58 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:36.412 07:18:58 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:36.412 07:18:58 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:36.412 07:18:58 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:36.412 07:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.412 07:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.412 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.412 ************************************ 00:09:36.412 START TEST dd_invalid_arguments 00:09:36.412 ************************************ 00:09:36.412 07:18:58 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:09:36.412 07:18:58 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:36.412 07:18:58 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.412 07:18:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:36.412 07:18:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.412 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.412 07:18:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.412 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.412 07:18:58 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.412 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.412 07:18:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.412 07:18:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.412 07:18:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:36.412 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:36.412 options: 00:09:36.412 -c, --config JSON config file (default none) 00:09:36.412 --json JSON config file (default none) 00:09:36.412 --json-ignore-init-errors 00:09:36.412 don't exit on invalid config entry 00:09:36.412 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:36.412 -g, --single-file-segments 00:09:36.412 force creating just one hugetlbfs file 00:09:36.412 -h, --help show this usage 00:09:36.412 -i, --shm-id shared memory ID (optional) 00:09:36.412 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:36.412 --lcores lcore to CPU mapping list. The list is in the format: 00:09:36.412 [<,lcores[@CPUs]>...] 00:09:36.413 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:36.413 Within the group, '-' is used for range separator, 00:09:36.413 ',' is used for single number separator. 00:09:36.413 '( )' can be omitted for single element group, 00:09:36.413 '@' can be omitted if cpus and lcores have the same value 00:09:36.413 -n, --mem-channels channel number of memory channels used for DPDK 00:09:36.413 -p, --main-core main (primary) core for DPDK 00:09:36.413 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:36.413 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:36.413 --disable-cpumask-locks Disable CPU core lock files. 00:09:36.413 --silence-noticelog disable notice level logging to stderr 00:09:36.413 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:36.413 -u, --no-pci disable PCI access 00:09:36.413 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:36.413 --max-delay maximum reactor delay (in microseconds) 00:09:36.413 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:36.413 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:36.413 -R, --huge-unlink unlink huge files after initialization 00:09:36.413 -v, --version print SPDK version 00:09:36.413 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:36.413 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:36.413 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:36.413 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:36.413 Tracepoints vary in size and can use more than one trace entry. 
00:09:36.413 --rpcs-allowed comma-separated list of permitted RPCS 00:09:36.413 --env-context Opaque context for use of the env implementation 00:09:36.413 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:36.413 --no-huge run without using hugepages 00:09:36.413 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:36.413 -e, --tpoint-group [:] 00:09:36.413 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:09:36.413 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:36.413 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:09:36.413 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:36.413 [2024-11-28 07:18:58.498909] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:09:36.413 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:36.413 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:36.413 [--------- DD Options ---------] 00:09:36.413 --if Input file. Must specify either --if or --ib. 00:09:36.413 --ib Input bdev. Must specifier either --if or --ib 00:09:36.413 --of Output file. Must specify either --of or --ob. 00:09:36.413 --ob Output bdev. Must specify either --of or --ob. 00:09:36.413 --iflag Input file flags. 00:09:36.413 --oflag Output file flags. 00:09:36.413 --bs I/O unit size (default: 4096) 00:09:36.413 --qd Queue depth (default: 2) 00:09:36.413 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:36.413 --skip Skip this many I/O units at start of input. (default: 0) 00:09:36.413 --seek Skip this many I/O units at start of output. (default: 0) 00:09:36.413 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:09:36.413 --sparse Enable hole skipping in input target 00:09:36.413 Available iflag and oflag values: 00:09:36.413 append - append mode 00:09:36.413 direct - use direct I/O for data 00:09:36.413 directory - fail unless a directory 00:09:36.413 dsync - use synchronized I/O for data 00:09:36.413 noatime - do not update access time 00:09:36.413 noctty - do not assign controlling terminal from file 00:09:36.413 nofollow - do not follow symlinks 00:09:36.413 nonblock - use non-blocking I/O 00:09:36.413 sync - use synchronized I/O for data and metadata 00:09:36.413 07:18:58 -- common/autotest_common.sh@653 -- # es=2 00:09:36.413 07:18:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.413 07:18:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.413 07:18:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.413 00:09:36.413 real 0m0.071s 00:09:36.413 user 0m0.041s 00:09:36.413 sys 0m0.027s 00:09:36.413 07:18:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.413 ************************************ 00:09:36.413 END TEST dd_invalid_arguments 00:09:36.413 ************************************ 00:09:36.413 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.413 07:18:58 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:36.413 07:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.413 07:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.413 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.413 ************************************ 00:09:36.413 START TEST dd_double_input 00:09:36.413 ************************************ 00:09:36.413 07:18:58 -- common/autotest_common.sh@1114 -- # double_input 00:09:36.413 07:18:58 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:36.413 07:18:58 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.413 07:18:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:36.413 07:18:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.413 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.413 07:18:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.413 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.413 07:18:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.413 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.413 07:18:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.413 07:18:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.413 07:18:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:36.413 [2024-11-28 07:18:58.614384] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
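Each test in this negative suite follows the same shape: valid_exec_arg resolves the spdk_dd binary, the binary is invoked with a deliberately bad flag combination, and the surrounding NOT helper passes only if the command exits non-zero. A self-contained approximation of the double-input case above, with the caveat that the real NOT and valid_exec_arg helpers in autotest_common.sh also manage xtrace state and other bookkeeping:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# --if and --ib both name an input, so spdk_dd must refuse the combination
if "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=; then
    echo "spdk_dd accepted --if together with --ib; expected a usage error" >&2
    exit 1
fi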
00:09:36.413 07:18:58 -- common/autotest_common.sh@653 -- # es=22 00:09:36.413 07:18:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.413 07:18:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.413 ************************************ 00:09:36.413 END TEST dd_double_input 00:09:36.413 ************************************ 00:09:36.413 07:18:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.413 00:09:36.413 real 0m0.070s 00:09:36.413 user 0m0.045s 00:09:36.413 sys 0m0.023s 00:09:36.413 07:18:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.413 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.413 07:18:58 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:36.413 07:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.413 07:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.413 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.413 ************************************ 00:09:36.413 START TEST dd_double_output 00:09:36.413 ************************************ 00:09:36.413 07:18:58 -- common/autotest_common.sh@1114 -- # double_output 00:09:36.413 07:18:58 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:36.413 07:18:58 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.413 07:18:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:36.413 07:18:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.413 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.413 07:18:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.413 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.413 07:18:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.413 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.413 07:18:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.413 07:18:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.413 07:18:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:36.670 [2024-11-28 07:18:58.735358] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
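For these argument errors spdk_dd exits with status 22, and the assertions that follow each run reduce to requiring a non-zero status; statuses above 128 (such as the 244 and 236 reported by later tests in this log) are first folded down by 128 before being mapped to 1. A rough sketch of that bookkeeping, with es named as in the trace; the folding step is inferred from the recorded values rather than quoted from autotest_common.sh:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
es=$?                                     # 22 in the dd_double_output run above
(( es > 128 )) && es=$(( es - 128 ))      # 244 -> 116, 236 -> 108, as seen later in this log
(( es != 0 )) || { echo "conflicting outputs were unexpectedly accepted" >&2; exit 1; }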
00:09:36.670 07:18:58 -- common/autotest_common.sh@653 -- # es=22 00:09:36.670 07:18:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.671 07:18:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.671 07:18:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.671 00:09:36.671 real 0m0.080s 00:09:36.671 user 0m0.050s 00:09:36.671 sys 0m0.027s 00:09:36.671 07:18:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.671 ************************************ 00:09:36.671 END TEST dd_double_output 00:09:36.671 ************************************ 00:09:36.671 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.671 07:18:58 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:36.671 07:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.671 07:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.671 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.671 ************************************ 00:09:36.671 START TEST dd_no_input 00:09:36.671 ************************************ 00:09:36.671 07:18:58 -- common/autotest_common.sh@1114 -- # no_input 00:09:36.671 07:18:58 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:36.671 07:18:58 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.671 07:18:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:36.671 07:18:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.671 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.671 07:18:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.671 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.671 07:18:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.671 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.671 07:18:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.671 07:18:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.671 07:18:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:36.671 [2024-11-28 07:18:58.859194] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:09:36.671 07:18:58 -- common/autotest_common.sh@653 -- # es=22 00:09:36.671 07:18:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.671 07:18:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.671 07:18:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.671 00:09:36.671 real 0m0.067s 00:09:36.671 user 0m0.039s 00:09:36.671 sys 0m0.027s 00:09:36.671 07:18:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.671 ************************************ 00:09:36.671 END TEST dd_no_input 00:09:36.671 ************************************ 00:09:36.671 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.671 07:18:58 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:09:36.671 07:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.671 07:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.671 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.671 ************************************ 
00:09:36.671 START TEST dd_no_output 00:09:36.671 ************************************ 00:09:36.671 07:18:58 -- common/autotest_common.sh@1114 -- # no_output 00:09:36.671 07:18:58 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:36.671 07:18:58 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.671 07:18:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:36.671 07:18:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.671 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.671 07:18:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.671 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.671 07:18:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.671 07:18:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.671 07:18:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.671 07:18:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.671 07:18:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:36.929 [2024-11-28 07:18:58.970903] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:09:36.929 07:18:58 -- common/autotest_common.sh@653 -- # es=22 00:09:36.929 07:18:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.929 07:18:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.929 07:18:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.929 00:09:36.929 real 0m0.062s 00:09:36.929 user 0m0.035s 00:09:36.929 sys 0m0.026s 00:09:36.929 07:18:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.929 ************************************ 00:09:36.929 END TEST dd_no_output 00:09:36.929 ************************************ 00:09:36.929 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.929 07:18:59 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:36.929 07:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.929 07:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.929 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.929 ************************************ 00:09:36.929 START TEST dd_wrong_blocksize 00:09:36.929 ************************************ 00:09:36.929 07:18:59 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:09:36.929 07:18:59 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:36.929 07:18:59 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.929 07:18:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:36.929 07:18:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.929 07:18:59 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:09:36.929 07:18:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.929 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.929 07:18:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.929 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.929 07:18:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.929 07:18:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:36.929 07:18:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:36.929 [2024-11-28 07:18:59.085578] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:09:36.929 07:18:59 -- common/autotest_common.sh@653 -- # es=22 00:09:36.929 07:18:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:36.929 ************************************ 00:09:36.929 END TEST dd_wrong_blocksize 00:09:36.929 ************************************ 00:09:36.929 07:18:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:36.929 07:18:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:36.929 00:09:36.929 real 0m0.068s 00:09:36.929 user 0m0.036s 00:09:36.929 sys 0m0.031s 00:09:36.929 07:18:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:36.929 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.929 07:18:59 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:36.929 07:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:36.929 07:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.929 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:36.929 ************************************ 00:09:36.929 START TEST dd_smaller_blocksize 00:09:36.929 ************************************ 00:09:36.929 07:18:59 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:09:36.929 07:18:59 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:36.929 07:18:59 -- common/autotest_common.sh@650 -- # local es=0 00:09:36.929 07:18:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:36.929 07:18:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.929 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.929 07:18:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.929 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.929 07:18:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.929 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:36.929 07:18:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:36.929 07:18:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:09:36.929 07:18:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:37.187 [2024-11-28 07:18:59.203500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:37.187 [2024-11-28 07:18:59.203611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71972 ] 00:09:37.187 [2024-11-28 07:18:59.346383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.187 [2024-11-28 07:18:59.443357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.444 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:37.444 [2024-11-28 07:18:59.530930] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:37.444 [2024-11-28 07:18:59.530967] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:37.444 [2024-11-28 07:18:59.641686] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:37.702 07:18:59 -- common/autotest_common.sh@653 -- # es=244 00:09:37.702 07:18:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.702 07:18:59 -- common/autotest_common.sh@662 -- # es=116 00:09:37.702 07:18:59 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:37.702 07:18:59 -- common/autotest_common.sh@670 -- # es=1 00:09:37.702 07:18:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.702 00:09:37.702 real 0m0.577s 00:09:37.702 user 0m0.326s 00:09:37.702 sys 0m0.145s 00:09:37.702 ************************************ 00:09:37.702 END TEST dd_smaller_blocksize 00:09:37.702 ************************************ 00:09:37.702 07:18:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.702 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:37.702 07:18:59 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:37.702 07:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:37.702 07:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.702 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:37.702 ************************************ 00:09:37.702 START TEST dd_invalid_count 00:09:37.702 ************************************ 00:09:37.702 07:18:59 -- common/autotest_common.sh@1114 -- # invalid_count 00:09:37.702 07:18:59 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:37.702 07:18:59 -- common/autotest_common.sh@650 -- # local es=0 00:09:37.702 07:18:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:37.702 07:18:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.702 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.702 07:18:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.702 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.702 07:18:59 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.702 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.702 07:18:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.702 07:18:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.702 07:18:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:37.702 [2024-11-28 07:18:59.825905] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:09:37.702 07:18:59 -- common/autotest_common.sh@653 -- # es=22 00:09:37.702 07:18:59 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.702 07:18:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:37.702 07:18:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.702 00:09:37.702 real 0m0.069s 00:09:37.702 user 0m0.046s 00:09:37.702 sys 0m0.022s 00:09:37.702 07:18:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.702 ************************************ 00:09:37.702 END TEST dd_invalid_count 00:09:37.702 ************************************ 00:09:37.702 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:37.702 07:18:59 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:37.702 07:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:37.702 07:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.702 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:37.702 ************************************ 00:09:37.702 START TEST dd_invalid_oflag 00:09:37.702 ************************************ 00:09:37.702 07:18:59 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:09:37.702 07:18:59 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:37.702 07:18:59 -- common/autotest_common.sh@650 -- # local es=0 00:09:37.702 07:18:59 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:37.702 07:18:59 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.702 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.702 07:18:59 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.702 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.702 07:18:59 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.702 07:18:59 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.702 07:18:59 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.702 07:18:59 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.702 07:18:59 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:37.702 [2024-11-28 07:18:59.935487] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:09:37.702 ************************************ 00:09:37.702 END TEST dd_invalid_oflag 00:09:37.702 ************************************ 00:09:37.702 07:18:59 -- common/autotest_common.sh@653 -- # es=22 00:09:37.702 07:18:59 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.702 07:18:59 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:37.702 07:18:59 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.702 00:09:37.702 real 0m0.060s 00:09:37.702 user 0m0.037s 00:09:37.702 sys 0m0.022s 00:09:37.702 07:18:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.702 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:37.959 07:18:59 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:37.959 07:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:37.959 07:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.959 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:09:37.959 ************************************ 00:09:37.959 START TEST dd_invalid_iflag 00:09:37.959 ************************************ 00:09:37.959 07:19:00 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:09:37.959 07:19:00 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:37.959 07:19:00 -- common/autotest_common.sh@650 -- # local es=0 00:09:37.959 07:19:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:37.959 07:19:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.959 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.959 07:19:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.959 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.959 07:19:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.959 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.959 07:19:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.960 07:19:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.960 07:19:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:37.960 [2024-11-28 07:19:00.067861] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:09:37.960 07:19:00 -- common/autotest_common.sh@653 -- # es=22 00:09:37.960 07:19:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:37.960 07:19:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:37.960 07:19:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:37.960 00:09:37.960 real 0m0.096s 00:09:37.960 user 0m0.063s 00:09:37.960 sys 0m0.032s 00:09:37.960 07:19:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:37.960 07:19:00 -- common/autotest_common.sh@10 -- # set +x 00:09:37.960 ************************************ 00:09:37.960 END TEST dd_invalid_iflag 00:09:37.960 ************************************ 00:09:37.960 07:19:00 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:37.960 07:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:37.960 07:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:37.960 07:19:00 -- common/autotest_common.sh@10 -- # set +x 00:09:37.960 ************************************ 00:09:37.960 START TEST dd_unknown_flag 00:09:37.960 ************************************ 00:09:37.960 07:19:00 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:09:37.960 07:19:00 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:37.960 07:19:00 -- common/autotest_common.sh@650 -- # local es=0 00:09:37.960 07:19:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:37.960 07:19:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.960 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.960 07:19:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.960 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.960 07:19:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.960 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:37.960 07:19:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:37.960 07:19:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:37.960 07:19:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:37.960 [2024-11-28 07:19:00.196482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:37.960 [2024-11-28 07:19:00.196828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72064 ] 00:09:38.217 [2024-11-28 07:19:00.335977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.217 [2024-11-28 07:19:00.424364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.475 [2024-11-28 07:19:00.511075] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:09:38.475 [2024-11-28 07:19:00.511373] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:38.475 [2024-11-28 07:19:00.511419] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:38.475 [2024-11-28 07:19:00.511462] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.475 [2024-11-28 07:19:00.637586] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:38.475 07:19:00 -- common/autotest_common.sh@653 -- # es=236 00:09:38.475 07:19:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:38.475 07:19:00 -- common/autotest_common.sh@662 -- # es=108 00:09:38.475 07:19:00 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:38.475 07:19:00 -- common/autotest_common.sh@670 -- # es=1 00:09:38.475 07:19:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:38.475 00:09:38.475 real 0m0.581s 00:09:38.475 user 0m0.328s 00:09:38.475 sys 0m0.146s 00:09:38.475 07:19:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.475 07:19:00 -- common/autotest_common.sh@10 -- # set +x 00:09:38.475 ************************************ 00:09:38.475 END 
TEST dd_unknown_flag 00:09:38.475 ************************************ 00:09:38.733 07:19:00 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:38.733 07:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:38.733 07:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:38.733 07:19:00 -- common/autotest_common.sh@10 -- # set +x 00:09:38.733 ************************************ 00:09:38.733 START TEST dd_invalid_json 00:09:38.733 ************************************ 00:09:38.733 07:19:00 -- common/autotest_common.sh@1114 -- # invalid_json 00:09:38.733 07:19:00 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:38.733 07:19:00 -- dd/negative_dd.sh@95 -- # : 00:09:38.733 07:19:00 -- common/autotest_common.sh@650 -- # local es=0 00:09:38.733 07:19:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:38.733 07:19:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.733 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:38.733 07:19:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.733 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:38.733 07:19:00 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.733 07:19:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:38.733 07:19:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:38.733 07:19:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:38.733 07:19:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:38.733 [2024-11-28 07:19:00.827624] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
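The dd_invalid_count, dd_invalid_oflag, dd_invalid_iflag, dd_unknown_flag and dd_invalid_json cases traced above all follow one pattern: spdk_dd is launched with a deliberately invalid option combination through the harness's NOT/valid_exec_arg wrappers, it is expected to print an *ERROR* line and exit non-zero, and the wrapper records that status in es (22 for the oflag/iflag cases). The lines below are a minimal stand-alone sketch of that pattern, not the harness code itself; SPDK_DD is a placeholder path copied from the trace.

# Hedged sketch only: reproduce one negative check outside the test harness.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # placeholder path from the trace
if "$SPDK_DD" --ib= --ob= --oflag=0; then
    # spdk_dd accepting --oflag without --of would be a regression
    echo "FAIL: spdk_dd accepted --oflag without --of" >&2
    exit 1
else
    es=$?   # exit status of the failed spdk_dd invocation
    echo "PASS: spdk_dd rejected the flags (exit status $es)"
fi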
00:09:38.733 [2024-11-28 07:19:00.827739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72097 ] 00:09:38.733 [2024-11-28 07:19:00.969064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.990 [2024-11-28 07:19:01.059337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.991 [2024-11-28 07:19:01.059514] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:09:38.991 [2024-11-28 07:19:01.059539] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:38.991 [2024-11-28 07:19:01.059584] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:38.991 07:19:01 -- common/autotest_common.sh@653 -- # es=234 00:09:38.991 07:19:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:38.991 07:19:01 -- common/autotest_common.sh@662 -- # es=106 00:09:38.991 ************************************ 00:09:38.991 END TEST dd_invalid_json 00:09:38.991 ************************************ 00:09:38.991 07:19:01 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:38.991 07:19:01 -- common/autotest_common.sh@670 -- # es=1 00:09:38.991 07:19:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:38.991 00:09:38.991 real 0m0.373s 00:09:38.991 user 0m0.197s 00:09:38.991 sys 0m0.073s 00:09:38.991 07:19:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.991 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:09:38.991 ************************************ 00:09:38.991 END TEST spdk_dd_negative 00:09:38.991 ************************************ 00:09:38.991 00:09:38.991 real 0m2.923s 00:09:38.991 user 0m1.541s 00:09:38.991 sys 0m1.018s 00:09:38.991 07:19:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.991 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:09:38.991 ************************************ 00:09:38.991 END TEST spdk_dd 00:09:38.991 ************************************ 00:09:38.991 00:09:38.991 real 1m16.457s 00:09:38.991 user 0m47.308s 00:09:38.991 sys 0m19.987s 00:09:38.991 07:19:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:38.991 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:09:38.991 07:19:01 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:09:38.991 07:19:01 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:09:38.991 07:19:01 -- spdk/autotest.sh@255 -- # timing_exit lib 00:09:38.991 07:19:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.991 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:09:39.249 07:19:01 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:09:39.249 07:19:01 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:09:39.249 07:19:01 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:09:39.249 07:19:01 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:09:39.249 07:19:01 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:09:39.249 07:19:01 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:09:39.249 07:19:01 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:39.249 07:19:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:39.249 07:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.249 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:09:39.249 ************************************ 00:09:39.249 START 
TEST nvmf_tcp 00:09:39.249 ************************************ 00:09:39.249 07:19:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:39.249 * Looking for test storage... 00:09:39.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:39.249 07:19:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:39.249 07:19:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:39.249 07:19:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:39.249 07:19:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:39.249 07:19:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:39.249 07:19:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:39.249 07:19:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:39.249 07:19:01 -- scripts/common.sh@335 -- # IFS=.-: 00:09:39.249 07:19:01 -- scripts/common.sh@335 -- # read -ra ver1 00:09:39.249 07:19:01 -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.249 07:19:01 -- scripts/common.sh@336 -- # read -ra ver2 00:09:39.249 07:19:01 -- scripts/common.sh@337 -- # local 'op=<' 00:09:39.249 07:19:01 -- scripts/common.sh@339 -- # ver1_l=2 00:09:39.249 07:19:01 -- scripts/common.sh@340 -- # ver2_l=1 00:09:39.249 07:19:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:39.249 07:19:01 -- scripts/common.sh@343 -- # case "$op" in 00:09:39.249 07:19:01 -- scripts/common.sh@344 -- # : 1 00:09:39.249 07:19:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:39.249 07:19:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.249 07:19:01 -- scripts/common.sh@364 -- # decimal 1 00:09:39.249 07:19:01 -- scripts/common.sh@352 -- # local d=1 00:09:39.249 07:19:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.249 07:19:01 -- scripts/common.sh@354 -- # echo 1 00:09:39.249 07:19:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:39.249 07:19:01 -- scripts/common.sh@365 -- # decimal 2 00:09:39.249 07:19:01 -- scripts/common.sh@352 -- # local d=2 00:09:39.249 07:19:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.249 07:19:01 -- scripts/common.sh@354 -- # echo 2 00:09:39.249 07:19:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:39.250 07:19:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:39.250 07:19:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:39.250 07:19:01 -- scripts/common.sh@367 -- # return 0 00:09:39.250 07:19:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.250 07:19:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:39.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.250 --rc genhtml_branch_coverage=1 00:09:39.250 --rc genhtml_function_coverage=1 00:09:39.250 --rc genhtml_legend=1 00:09:39.250 --rc geninfo_all_blocks=1 00:09:39.250 --rc geninfo_unexecuted_blocks=1 00:09:39.250 00:09:39.250 ' 00:09:39.250 07:19:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:39.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.250 --rc genhtml_branch_coverage=1 00:09:39.250 --rc genhtml_function_coverage=1 00:09:39.250 --rc genhtml_legend=1 00:09:39.250 --rc geninfo_all_blocks=1 00:09:39.250 --rc geninfo_unexecuted_blocks=1 00:09:39.250 00:09:39.250 ' 00:09:39.250 07:19:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:39.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.250 --rc 
genhtml_branch_coverage=1 00:09:39.250 --rc genhtml_function_coverage=1 00:09:39.250 --rc genhtml_legend=1 00:09:39.250 --rc geninfo_all_blocks=1 00:09:39.250 --rc geninfo_unexecuted_blocks=1 00:09:39.250 00:09:39.250 ' 00:09:39.250 07:19:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:39.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.250 --rc genhtml_branch_coverage=1 00:09:39.250 --rc genhtml_function_coverage=1 00:09:39.250 --rc genhtml_legend=1 00:09:39.250 --rc geninfo_all_blocks=1 00:09:39.250 --rc geninfo_unexecuted_blocks=1 00:09:39.250 00:09:39.250 ' 00:09:39.250 07:19:01 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:39.250 07:19:01 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:39.250 07:19:01 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.250 07:19:01 -- nvmf/common.sh@7 -- # uname -s 00:09:39.250 07:19:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.250 07:19:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.250 07:19:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.250 07:19:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.250 07:19:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.250 07:19:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.250 07:19:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.250 07:19:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.250 07:19:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.250 07:19:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.250 07:19:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:09:39.250 07:19:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:09:39.250 07:19:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.250 07:19:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.250 07:19:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:39.250 07:19:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.250 07:19:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.250 07:19:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.250 07:19:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.250 07:19:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.250 07:19:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.250 07:19:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.250 07:19:01 -- paths/export.sh@5 -- # export PATH 00:09:39.250 07:19:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.250 07:19:01 -- nvmf/common.sh@46 -- # : 0 00:09:39.250 07:19:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:39.250 07:19:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:39.250 07:19:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:39.250 07:19:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.250 07:19:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.250 07:19:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:39.250 07:19:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:39.250 07:19:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:39.250 07:19:01 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:39.250 07:19:01 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:39.250 07:19:01 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:39.250 07:19:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:39.250 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:09:39.509 07:19:01 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:09:39.509 07:19:01 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:39.509 07:19:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:39.509 07:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:39.509 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:09:39.509 ************************************ 00:09:39.509 START TEST nvmf_host_management 00:09:39.509 ************************************ 00:09:39.509 07:19:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:39.509 * Looking for test storage... 
00:09:39.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:39.509 07:19:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:39.509 07:19:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:39.509 07:19:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:39.509 07:19:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:39.509 07:19:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:39.509 07:19:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:39.509 07:19:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:39.509 07:19:01 -- scripts/common.sh@335 -- # IFS=.-: 00:09:39.509 07:19:01 -- scripts/common.sh@335 -- # read -ra ver1 00:09:39.509 07:19:01 -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.509 07:19:01 -- scripts/common.sh@336 -- # read -ra ver2 00:09:39.509 07:19:01 -- scripts/common.sh@337 -- # local 'op=<' 00:09:39.509 07:19:01 -- scripts/common.sh@339 -- # ver1_l=2 00:09:39.509 07:19:01 -- scripts/common.sh@340 -- # ver2_l=1 00:09:39.509 07:19:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:39.509 07:19:01 -- scripts/common.sh@343 -- # case "$op" in 00:09:39.509 07:19:01 -- scripts/common.sh@344 -- # : 1 00:09:39.509 07:19:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:39.509 07:19:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.509 07:19:01 -- scripts/common.sh@364 -- # decimal 1 00:09:39.509 07:19:01 -- scripts/common.sh@352 -- # local d=1 00:09:39.509 07:19:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.509 07:19:01 -- scripts/common.sh@354 -- # echo 1 00:09:39.509 07:19:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:39.509 07:19:01 -- scripts/common.sh@365 -- # decimal 2 00:09:39.509 07:19:01 -- scripts/common.sh@352 -- # local d=2 00:09:39.509 07:19:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.509 07:19:01 -- scripts/common.sh@354 -- # echo 2 00:09:39.509 07:19:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:39.509 07:19:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:39.509 07:19:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:39.509 07:19:01 -- scripts/common.sh@367 -- # return 0 00:09:39.509 07:19:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.509 07:19:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.509 --rc genhtml_branch_coverage=1 00:09:39.509 --rc genhtml_function_coverage=1 00:09:39.509 --rc genhtml_legend=1 00:09:39.509 --rc geninfo_all_blocks=1 00:09:39.509 --rc geninfo_unexecuted_blocks=1 00:09:39.509 00:09:39.509 ' 00:09:39.509 07:19:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.509 --rc genhtml_branch_coverage=1 00:09:39.509 --rc genhtml_function_coverage=1 00:09:39.509 --rc genhtml_legend=1 00:09:39.509 --rc geninfo_all_blocks=1 00:09:39.509 --rc geninfo_unexecuted_blocks=1 00:09:39.509 00:09:39.509 ' 00:09:39.509 07:19:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.509 --rc genhtml_branch_coverage=1 00:09:39.509 --rc genhtml_function_coverage=1 00:09:39.509 --rc genhtml_legend=1 00:09:39.509 --rc geninfo_all_blocks=1 00:09:39.509 --rc geninfo_unexecuted_blocks=1 00:09:39.509 00:09:39.509 ' 00:09:39.509 
07:19:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:39.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.509 --rc genhtml_branch_coverage=1 00:09:39.509 --rc genhtml_function_coverage=1 00:09:39.509 --rc genhtml_legend=1 00:09:39.509 --rc geninfo_all_blocks=1 00:09:39.509 --rc geninfo_unexecuted_blocks=1 00:09:39.509 00:09:39.509 ' 00:09:39.509 07:19:01 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.509 07:19:01 -- nvmf/common.sh@7 -- # uname -s 00:09:39.509 07:19:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.509 07:19:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.509 07:19:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.509 07:19:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.509 07:19:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.509 07:19:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.509 07:19:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.509 07:19:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.510 07:19:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.510 07:19:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.510 07:19:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:09:39.510 07:19:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:09:39.510 07:19:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.510 07:19:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.510 07:19:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:39.510 07:19:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.510 07:19:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.510 07:19:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.510 07:19:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.510 07:19:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.510 07:19:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.510 07:19:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.510 07:19:01 -- paths/export.sh@5 -- # export PATH 00:09:39.510 07:19:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.510 07:19:01 -- nvmf/common.sh@46 -- # : 0 00:09:39.510 07:19:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:39.510 07:19:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:39.510 07:19:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:39.510 07:19:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.510 07:19:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.510 07:19:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:39.510 07:19:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:39.510 07:19:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:39.510 07:19:01 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.510 07:19:01 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.510 07:19:01 -- target/host_management.sh@104 -- # nvmftestinit 00:09:39.510 07:19:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:39.768 07:19:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.768 07:19:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:39.768 07:19:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:39.768 07:19:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:39.768 07:19:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.768 07:19:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.768 07:19:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.768 07:19:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:39.768 07:19:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:39.768 07:19:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:39.768 07:19:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:39.768 07:19:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:39.768 07:19:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:39.768 07:19:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.768 07:19:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.768 07:19:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:39.768 07:19:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:39.768 07:19:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:39.768 07:19:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:39.768 07:19:01 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:39.768 07:19:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.768 07:19:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:39.768 07:19:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:39.768 07:19:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:39.768 07:19:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:39.768 07:19:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:39.768 Cannot find device "nvmf_init_br" 00:09:39.768 07:19:01 -- nvmf/common.sh@153 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:39.768 Cannot find device "nvmf_tgt_br" 00:09:39.768 07:19:01 -- nvmf/common.sh@154 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:39.768 Cannot find device "nvmf_tgt_br2" 00:09:39.768 07:19:01 -- nvmf/common.sh@155 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:39.768 Cannot find device "nvmf_init_br" 00:09:39.768 07:19:01 -- nvmf/common.sh@156 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:39.768 Cannot find device "nvmf_tgt_br" 00:09:39.768 07:19:01 -- nvmf/common.sh@157 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:39.768 Cannot find device "nvmf_tgt_br2" 00:09:39.768 07:19:01 -- nvmf/common.sh@158 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:39.768 Cannot find device "nvmf_br" 00:09:39.768 07:19:01 -- nvmf/common.sh@159 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:39.768 Cannot find device "nvmf_init_if" 00:09:39.768 07:19:01 -- nvmf/common.sh@160 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.768 07:19:01 -- nvmf/common.sh@161 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.768 07:19:01 -- nvmf/common.sh@162 -- # true 00:09:39.768 07:19:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:39.768 07:19:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:39.768 07:19:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:39.769 07:19:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:39.769 07:19:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:39.769 07:19:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:39.769 07:19:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:39.769 07:19:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:39.769 07:19:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:39.769 07:19:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:39.769 07:19:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:39.769 07:19:01 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:39.769 07:19:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:39.769 07:19:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:39.769 07:19:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:39.769 07:19:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:39.769 07:19:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:40.027 07:19:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:40.027 07:19:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:40.027 07:19:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:40.027 07:19:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:40.027 07:19:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:40.027 07:19:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:40.027 07:19:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:40.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:09:40.027 00:09:40.027 --- 10.0.0.2 ping statistics --- 00:09:40.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.027 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:09:40.027 07:19:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:40.027 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:40.027 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:40.027 00:09:40.027 --- 10.0.0.3 ping statistics --- 00:09:40.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.027 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:40.027 07:19:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:40.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:40.027 00:09:40.027 --- 10.0.0.1 ping statistics --- 00:09:40.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.027 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:40.027 07:19:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.027 07:19:02 -- nvmf/common.sh@421 -- # return 0 00:09:40.027 07:19:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:40.027 07:19:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.027 07:19:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:40.027 07:19:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:40.027 07:19:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.027 07:19:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:40.027 07:19:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:40.027 07:19:02 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:09:40.027 07:19:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:40.027 07:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:40.027 07:19:02 -- common/autotest_common.sh@10 -- # set +x 00:09:40.027 ************************************ 00:09:40.027 START TEST nvmf_host_management 00:09:40.027 ************************************ 00:09:40.027 07:19:02 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:09:40.027 07:19:02 -- target/host_management.sh@69 -- # starttarget 00:09:40.027 07:19:02 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:40.027 07:19:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:40.027 07:19:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:40.027 07:19:02 -- common/autotest_common.sh@10 -- # set +x 00:09:40.027 07:19:02 -- nvmf/common.sh@469 -- # nvmfpid=72368 00:09:40.027 07:19:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:40.027 07:19:02 -- nvmf/common.sh@470 -- # waitforlisten 72368 00:09:40.027 07:19:02 -- common/autotest_common.sh@829 -- # '[' -z 72368 ']' 00:09:40.027 07:19:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.027 07:19:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:40.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.027 07:19:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.027 07:19:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:40.027 07:19:02 -- common/autotest_common.sh@10 -- # set +x 00:09:40.027 [2024-11-28 07:19:02.286676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:40.027 [2024-11-28 07:19:02.286808] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.286 [2024-11-28 07:19:02.432440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.286 [2024-11-28 07:19:02.527108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.286 [2024-11-28 07:19:02.527264] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:40.286 [2024-11-28 07:19:02.527278] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.286 [2024-11-28 07:19:02.527287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.286 [2024-11-28 07:19:02.527420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.286 [2024-11-28 07:19:02.528053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.286 [2024-11-28 07:19:02.528245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:40.286 [2024-11-28 07:19:02.528251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.220 07:19:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:41.220 07:19:03 -- common/autotest_common.sh@862 -- # return 0 00:09:41.220 07:19:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:41.220 07:19:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.220 07:19:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.220 07:19:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.220 07:19:03 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.220 07:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.220 07:19:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.220 [2024-11-28 07:19:03.367475] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.220 07:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.220 07:19:03 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:41.220 07:19:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.220 07:19:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.220 07:19:03 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:41.220 07:19:03 -- target/host_management.sh@23 -- # cat 00:09:41.220 07:19:03 -- target/host_management.sh@30 -- # rpc_cmd 00:09:41.220 07:19:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.220 07:19:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.220 Malloc0 00:09:41.220 [2024-11-28 07:19:03.440330] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.220 07:19:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.220 07:19:03 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:41.220 07:19:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:41.220 07:19:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.220 07:19:03 -- target/host_management.sh@73 -- # perfpid=72422 00:09:41.221 07:19:03 -- target/host_management.sh@74 -- # waitforlisten 72422 /var/tmp/bdevperf.sock 00:09:41.221 07:19:03 -- common/autotest_common.sh@829 -- # '[' -z 72422 ']' 00:09:41.221 07:19:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:41.221 07:19:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:41.221 07:19:03 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:41.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:09:41.221 07:19:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:41.221 07:19:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:41.221 07:19:03 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:41.221 07:19:03 -- common/autotest_common.sh@10 -- # set +x 00:09:41.221 07:19:03 -- nvmf/common.sh@520 -- # config=() 00:09:41.221 07:19:03 -- nvmf/common.sh@520 -- # local subsystem config 00:09:41.221 07:19:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:41.221 07:19:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:41.221 { 00:09:41.221 "params": { 00:09:41.221 "name": "Nvme$subsystem", 00:09:41.221 "trtype": "$TEST_TRANSPORT", 00:09:41.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.221 "adrfam": "ipv4", 00:09:41.221 "trsvcid": "$NVMF_PORT", 00:09:41.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.221 "hdgst": ${hdgst:-false}, 00:09:41.221 "ddgst": ${ddgst:-false} 00:09:41.221 }, 00:09:41.221 "method": "bdev_nvme_attach_controller" 00:09:41.221 } 00:09:41.221 EOF 00:09:41.221 )") 00:09:41.221 07:19:03 -- nvmf/common.sh@542 -- # cat 00:09:41.479 07:19:03 -- nvmf/common.sh@544 -- # jq . 00:09:41.480 07:19:03 -- nvmf/common.sh@545 -- # IFS=, 00:09:41.480 07:19:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:41.480 "params": { 00:09:41.480 "name": "Nvme0", 00:09:41.480 "trtype": "tcp", 00:09:41.480 "traddr": "10.0.0.2", 00:09:41.480 "adrfam": "ipv4", 00:09:41.480 "trsvcid": "4420", 00:09:41.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:41.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:41.480 "hdgst": false, 00:09:41.480 "ddgst": false 00:09:41.480 }, 00:09:41.480 "method": "bdev_nvme_attach_controller" 00:09:41.480 }' 00:09:41.480 [2024-11-28 07:19:03.545008] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:41.480 [2024-11-28 07:19:03.545130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72422 ] 00:09:41.480 [2024-11-28 07:19:03.692050] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.738 [2024-11-28 07:19:03.787512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.738 Running I/O for 10 seconds... 
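At this point bdevperf has attached Nvme0 over TCP at 10.0.0.2:4420 (using the JSON printed above and passed on /dev/fd/63) and started the 10-second verify workload; the next step in the trace confirms that I/O is actually flowing by polling bdev statistics over the bdevperf RPC socket until num_read_ops crosses a threshold. Below is a minimal stand-alone sketch of that polling step, assuming a stock SPDK checkout at the repository path shown in the trace and a running bdevperf instance; it mirrors, but is not, the harness's waitforio helper.

# Hedged sketch: poll bdevperf over its RPC socket for read I/O progress.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # placeholder path, assumed checkout location
SOCK=/var/tmp/bdevperf.sock
for _ in $(seq 1 10); do
    # bdev_get_iostat reports per-bdev counters; the harness checks num_read_ops
    ops=$("$RPC" -s "$SOCK" bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    if [ "$ops" -ge 100 ]; then
        echo "I/O is flowing: $ops reads completed"
        break
    fi
    sleep 1
done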
00:09:42.674 07:19:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.674 07:19:04 -- common/autotest_common.sh@862 -- # return 0 00:09:42.674 07:19:04 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:42.674 07:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.674 07:19:04 -- common/autotest_common.sh@10 -- # set +x 00:09:42.674 07:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.674 07:19:04 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:42.674 07:19:04 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:42.674 07:19:04 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:42.674 07:19:04 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:42.674 07:19:04 -- target/host_management.sh@52 -- # local ret=1 00:09:42.674 07:19:04 -- target/host_management.sh@53 -- # local i 00:09:42.674 07:19:04 -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:42.674 07:19:04 -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:42.674 07:19:04 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:42.674 07:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.674 07:19:04 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:42.674 07:19:04 -- common/autotest_common.sh@10 -- # set +x 00:09:42.674 07:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.674 07:19:04 -- target/host_management.sh@55 -- # read_io_count=1791 00:09:42.674 07:19:04 -- target/host_management.sh@58 -- # '[' 1791 -ge 100 ']' 00:09:42.674 07:19:04 -- target/host_management.sh@59 -- # ret=0 00:09:42.674 07:19:04 -- target/host_management.sh@60 -- # break 00:09:42.674 07:19:04 -- target/host_management.sh@64 -- # return 0 00:09:42.674 07:19:04 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:42.674 07:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.674 07:19:04 -- common/autotest_common.sh@10 -- # set +x 00:09:42.674 [2024-11-28 07:19:04.670305] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670367] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670379] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670389] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670397] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670406] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670415] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670424] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the 
state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670466] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670499] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670508] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670549] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.674 [2024-11-28 07:19:04.670557] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670565] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670581] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670597] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670652] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670662] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670670] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670729] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670738] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670747] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b6c330 is same with the state(5) to be set 00:09:42.675 [2024-11-28 07:19:04.670840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.670880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.670907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.670918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.670931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.670941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.670953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.670962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.670974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.670984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.670996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 
nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.675 [2024-11-28 07:19:04.671536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.675 [2024-11-28 07:19:04.671545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:111488 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120064 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.671986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.671998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121088 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:42.676 [2024-11-28 07:19:04.672287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.676 [2024-11-28 07:19:04.672298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d460 is same with the state(5) to be set 00:09:42.676 [2024-11-28 07:19:04.672379] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x198d460 was disconnected and freed. reset controller. 00:09:42.676 [2024-11-28 07:19:04.673510] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:09:42.676 task offset: 115584 on job bdev=Nvme0n1 fails 00:09:42.676 00:09:42.676 Latency(us) 00:09:42.676 [2024-11-28T07:19:04.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.676 [2024-11-28T07:19:04.951Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:42.676 [2024-11-28T07:19:04.951Z] Job: Nvme0n1 ended in about 0.71 seconds with error 00:09:42.676 Verification LBA range: start 0x0 length 0x400 00:09:42.676 Nvme0n1 : 0.71 2697.04 168.57 90.51 0.00 22569.22 2427.81 28597.53 00:09:42.676 [2024-11-28T07:19:04.951Z] =================================================================================================================== 00:09:42.676 [2024-11-28T07:19:04.951Z] Total : 2697.04 168.57 90.51 0.00 22569.22 2427.81 28597.53 00:09:42.677 [2024-11-28 07:19:04.675931] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:42.677 [2024-11-28 07:19:04.675969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198eda0 (9): Bad file descriptor 00:09:42.677 07:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.677 07:19:04 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:42.677 07:19:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.677 07:19:04 -- common/autotest_common.sh@10 -- # set +x 00:09:42.677 [2024-11-28 07:19:04.680920] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:09:42.677 [2024-11-28 07:19:04.681024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:09:42.677 [2024-11-28 07:19:04.681049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:42.677 [2024-11-28 07:19:04.681068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:09:42.677 [2024-11-28 07:19:04.681080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:09:42.677 [2024-11-28 07:19:04.681089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:09:42.677 [2024-11-28 07:19:04.681098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x198eda0 00:09:42.677 [2024-11-28 07:19:04.681132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198eda0 (9): Bad file descriptor 00:09:42.677 [2024-11-28 07:19:04.681151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr 
is in error state 00:09:42.677 [2024-11-28 07:19:04.681161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:09:42.677 [2024-11-28 07:19:04.681172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:09:42.677 [2024-11-28 07:19:04.681190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:09:42.677 07:19:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.677 07:19:04 -- target/host_management.sh@87 -- # sleep 1 00:09:43.625 07:19:05 -- target/host_management.sh@91 -- # kill -9 72422 00:09:43.625 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72422) - No such process 00:09:43.625 07:19:05 -- target/host_management.sh@91 -- # true 00:09:43.625 07:19:05 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:43.625 07:19:05 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:43.625 07:19:05 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:43.625 07:19:05 -- nvmf/common.sh@520 -- # config=() 00:09:43.625 07:19:05 -- nvmf/common.sh@520 -- # local subsystem config 00:09:43.625 07:19:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:43.625 07:19:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:43.625 { 00:09:43.625 "params": { 00:09:43.625 "name": "Nvme$subsystem", 00:09:43.625 "trtype": "$TEST_TRANSPORT", 00:09:43.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.625 "adrfam": "ipv4", 00:09:43.625 "trsvcid": "$NVMF_PORT", 00:09:43.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.625 "hdgst": ${hdgst:-false}, 00:09:43.625 "ddgst": ${ddgst:-false} 00:09:43.625 }, 00:09:43.625 "method": "bdev_nvme_attach_controller" 00:09:43.625 } 00:09:43.625 EOF 00:09:43.625 )") 00:09:43.625 07:19:05 -- nvmf/common.sh@542 -- # cat 00:09:43.625 07:19:05 -- nvmf/common.sh@544 -- # jq . 00:09:43.625 07:19:05 -- nvmf/common.sh@545 -- # IFS=, 00:09:43.625 07:19:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:43.625 "params": { 00:09:43.625 "name": "Nvme0", 00:09:43.626 "trtype": "tcp", 00:09:43.626 "traddr": "10.0.0.2", 00:09:43.626 "adrfam": "ipv4", 00:09:43.626 "trsvcid": "4420", 00:09:43.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:43.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:43.626 "hdgst": false, 00:09:43.626 "ddgst": false 00:09:43.626 }, 00:09:43.626 "method": "bdev_nvme_attach_controller" 00:09:43.626 }' 00:09:43.626 [2024-11-28 07:19:05.744736] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:43.626 [2024-11-28 07:19:05.744828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72466 ] 00:09:43.626 [2024-11-28 07:19:05.880764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.884 [2024-11-28 07:19:05.970232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.884 Running I/O for 1 seconds... 
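The verify run above is driven entirely by the bdevperf binary and the generated JSON traced in the preceding lines. As a rough standalone sketch — assuming the SPDK tree sits at /home/vagrant/spdk_repo/spdk as in this log and the target subsystem nqn.2016-06.io.spdk:cnode0 is still listening on 10.0.0.2:4420 — the same workload could be launched by hand, with process substitution standing in for the /dev/fd/62 pipe the harness uses:
# Sketch only: gen_nvmf_target_json is the helper from test/nvmf/common.sh seen in the trace above;
# sourcing common.sh outside the harness is an assumption and may have environment side effects.
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 1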
00:09:45.259 00:09:45.259 Latency(us) 00:09:45.259 [2024-11-28T07:19:07.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.259 [2024-11-28T07:19:07.534Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:45.259 Verification LBA range: start 0x0 length 0x400 00:09:45.259 Nvme0n1 : 1.01 2794.76 174.67 0.00 0.00 22534.03 1683.08 27763.43 00:09:45.259 [2024-11-28T07:19:07.534Z] =================================================================================================================== 00:09:45.259 [2024-11-28T07:19:07.534Z] Total : 2794.76 174.67 0.00 0.00 22534.03 1683.08 27763.43 00:09:45.259 07:19:07 -- target/host_management.sh@101 -- # stoptarget 00:09:45.259 07:19:07 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:45.259 07:19:07 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:45.259 07:19:07 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:45.259 07:19:07 -- target/host_management.sh@40 -- # nvmftestfini 00:09:45.259 07:19:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:45.259 07:19:07 -- nvmf/common.sh@116 -- # sync 00:09:45.259 07:19:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:45.259 07:19:07 -- nvmf/common.sh@119 -- # set +e 00:09:45.259 07:19:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:45.259 07:19:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:45.259 rmmod nvme_tcp 00:09:45.259 rmmod nvme_fabrics 00:09:45.259 rmmod nvme_keyring 00:09:45.260 07:19:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:45.260 07:19:07 -- nvmf/common.sh@123 -- # set -e 00:09:45.260 07:19:07 -- nvmf/common.sh@124 -- # return 0 00:09:45.260 07:19:07 -- nvmf/common.sh@477 -- # '[' -n 72368 ']' 00:09:45.260 07:19:07 -- nvmf/common.sh@478 -- # killprocess 72368 00:09:45.260 07:19:07 -- common/autotest_common.sh@936 -- # '[' -z 72368 ']' 00:09:45.260 07:19:07 -- common/autotest_common.sh@940 -- # kill -0 72368 00:09:45.260 07:19:07 -- common/autotest_common.sh@941 -- # uname 00:09:45.260 07:19:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:45.260 07:19:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72368 00:09:45.260 killing process with pid 72368 00:09:45.260 07:19:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:45.260 07:19:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:45.260 07:19:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72368' 00:09:45.260 07:19:07 -- common/autotest_common.sh@955 -- # kill 72368 00:09:45.260 07:19:07 -- common/autotest_common.sh@960 -- # wait 72368 00:09:45.517 [2024-11-28 07:19:07.739173] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:45.517 07:19:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:45.517 07:19:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:45.517 07:19:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:45.517 07:19:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.517 07:19:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:45.517 07:19:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.517 07:19:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.517 07:19:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.776 07:19:07 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:45.776 00:09:45.776 real 0m5.583s 00:09:45.776 user 0m23.660s 00:09:45.776 sys 0m1.311s 00:09:45.776 ************************************ 00:09:45.776 END TEST nvmf_host_management 00:09:45.776 ************************************ 00:09:45.776 07:19:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:45.776 07:19:07 -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 07:19:07 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:09:45.776 00:09:45.776 real 0m6.312s 00:09:45.776 user 0m23.911s 00:09:45.776 sys 0m1.576s 00:09:45.776 07:19:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:45.776 07:19:07 -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 ************************************ 00:09:45.776 END TEST nvmf_host_management 00:09:45.776 ************************************ 00:09:45.776 07:19:07 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:45.776 07:19:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:45.776 07:19:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.776 07:19:07 -- common/autotest_common.sh@10 -- # set +x 00:09:45.776 ************************************ 00:09:45.776 START TEST nvmf_lvol 00:09:45.776 ************************************ 00:09:45.776 07:19:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:45.776 * Looking for test storage... 00:09:45.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:45.776 07:19:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:45.776 07:19:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:45.776 07:19:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:45.776 07:19:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:45.776 07:19:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:45.776 07:19:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:45.776 07:19:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:45.776 07:19:08 -- scripts/common.sh@335 -- # IFS=.-: 00:09:45.776 07:19:08 -- scripts/common.sh@335 -- # read -ra ver1 00:09:45.776 07:19:08 -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.776 07:19:08 -- scripts/common.sh@336 -- # read -ra ver2 00:09:45.776 07:19:08 -- scripts/common.sh@337 -- # local 'op=<' 00:09:45.776 07:19:08 -- scripts/common.sh@339 -- # ver1_l=2 00:09:45.776 07:19:08 -- scripts/common.sh@340 -- # ver2_l=1 00:09:45.776 07:19:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:45.776 07:19:08 -- scripts/common.sh@343 -- # case "$op" in 00:09:45.776 07:19:08 -- scripts/common.sh@344 -- # : 1 00:09:45.776 07:19:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:45.776 07:19:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.776 07:19:08 -- scripts/common.sh@364 -- # decimal 1 00:09:45.776 07:19:08 -- scripts/common.sh@352 -- # local d=1 00:09:45.776 07:19:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.776 07:19:08 -- scripts/common.sh@354 -- # echo 1 00:09:45.776 07:19:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:45.776 07:19:08 -- scripts/common.sh@365 -- # decimal 2 00:09:45.776 07:19:08 -- scripts/common.sh@352 -- # local d=2 00:09:45.776 07:19:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.776 07:19:08 -- scripts/common.sh@354 -- # echo 2 00:09:45.776 07:19:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:45.776 07:19:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:45.776 07:19:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:45.776 07:19:08 -- scripts/common.sh@367 -- # return 0 00:09:45.776 07:19:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.776 07:19:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:45.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.776 --rc genhtml_branch_coverage=1 00:09:45.776 --rc genhtml_function_coverage=1 00:09:45.776 --rc genhtml_legend=1 00:09:45.776 --rc geninfo_all_blocks=1 00:09:45.776 --rc geninfo_unexecuted_blocks=1 00:09:45.776 00:09:45.776 ' 00:09:45.776 07:19:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:45.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.776 --rc genhtml_branch_coverage=1 00:09:45.776 --rc genhtml_function_coverage=1 00:09:45.776 --rc genhtml_legend=1 00:09:45.776 --rc geninfo_all_blocks=1 00:09:45.776 --rc geninfo_unexecuted_blocks=1 00:09:45.776 00:09:45.776 ' 00:09:45.776 07:19:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:45.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.776 --rc genhtml_branch_coverage=1 00:09:45.776 --rc genhtml_function_coverage=1 00:09:45.776 --rc genhtml_legend=1 00:09:45.776 --rc geninfo_all_blocks=1 00:09:45.776 --rc geninfo_unexecuted_blocks=1 00:09:45.776 00:09:45.776 ' 00:09:45.776 07:19:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:45.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.776 --rc genhtml_branch_coverage=1 00:09:45.776 --rc genhtml_function_coverage=1 00:09:45.776 --rc genhtml_legend=1 00:09:45.776 --rc geninfo_all_blocks=1 00:09:45.776 --rc geninfo_unexecuted_blocks=1 00:09:45.776 00:09:45.776 ' 00:09:45.776 07:19:08 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.035 07:19:08 -- nvmf/common.sh@7 -- # uname -s 00:09:46.035 07:19:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.035 07:19:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.035 07:19:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.035 07:19:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.035 07:19:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.035 07:19:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.035 07:19:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.035 07:19:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.035 07:19:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.035 07:19:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.035 07:19:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:09:46.035 
07:19:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:09:46.035 07:19:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.035 07:19:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.035 07:19:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.035 07:19:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.035 07:19:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.035 07:19:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.035 07:19:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.035 07:19:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.035 07:19:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.035 07:19:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.035 07:19:08 -- paths/export.sh@5 -- # export PATH 00:09:46.035 07:19:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.035 07:19:08 -- nvmf/common.sh@46 -- # : 0 00:09:46.035 07:19:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:46.036 07:19:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:46.036 07:19:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:46.036 07:19:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.036 07:19:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.036 07:19:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:09:46.036 07:19:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:46.036 07:19:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:46.036 07:19:08 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.036 07:19:08 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.036 07:19:08 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:46.036 07:19:08 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:46.036 07:19:08 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.036 07:19:08 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:46.036 07:19:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:46.036 07:19:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.036 07:19:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:46.036 07:19:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:46.036 07:19:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:46.036 07:19:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.036 07:19:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.036 07:19:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.036 07:19:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:46.036 07:19:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:46.036 07:19:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:46.036 07:19:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:46.036 07:19:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:46.036 07:19:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:46.036 07:19:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.036 07:19:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.036 07:19:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:46.036 07:19:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:46.036 07:19:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:46.036 07:19:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:46.036 07:19:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:46.036 07:19:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.036 07:19:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:46.036 07:19:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:46.036 07:19:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:46.036 07:19:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:46.036 07:19:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:46.036 07:19:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:46.036 Cannot find device "nvmf_tgt_br" 00:09:46.036 07:19:08 -- nvmf/common.sh@154 -- # true 00:09:46.036 07:19:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.036 Cannot find device "nvmf_tgt_br2" 00:09:46.036 07:19:08 -- nvmf/common.sh@155 -- # true 00:09:46.036 07:19:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:46.036 07:19:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:46.036 Cannot find device "nvmf_tgt_br" 00:09:46.036 07:19:08 -- nvmf/common.sh@157 -- # true 00:09:46.036 07:19:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:46.036 Cannot find device "nvmf_tgt_br2" 00:09:46.036 07:19:08 -- nvmf/common.sh@158 -- # true 00:09:46.036 07:19:08 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:09:46.036 07:19:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:46.036 07:19:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.036 07:19:08 -- nvmf/common.sh@161 -- # true 00:09:46.036 07:19:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.036 07:19:08 -- nvmf/common.sh@162 -- # true 00:09:46.036 07:19:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:46.036 07:19:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:46.036 07:19:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:46.036 07:19:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:46.036 07:19:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:46.036 07:19:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:46.036 07:19:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:46.036 07:19:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:46.036 07:19:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:46.036 07:19:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:46.036 07:19:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:46.296 07:19:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:46.296 07:19:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:46.296 07:19:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:46.296 07:19:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:46.296 07:19:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:46.296 07:19:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:46.296 07:19:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:46.296 07:19:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:46.296 07:19:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:46.296 07:19:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:46.296 07:19:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:46.296 07:19:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:46.296 07:19:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:46.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:09:46.296 00:09:46.296 --- 10.0.0.2 ping statistics --- 00:09:46.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.296 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:46.296 07:19:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:46.296 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:46.296 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:09:46.296 00:09:46.296 --- 10.0.0.3 ping statistics --- 00:09:46.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.296 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:46.296 07:19:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:46.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:46.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:46.296 00:09:46.296 --- 10.0.0.1 ping statistics --- 00:09:46.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.296 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:46.296 07:19:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.296 07:19:08 -- nvmf/common.sh@421 -- # return 0 00:09:46.296 07:19:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:46.296 07:19:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.296 07:19:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:46.296 07:19:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:46.296 07:19:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.296 07:19:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:46.296 07:19:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:46.296 07:19:08 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:46.296 07:19:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:46.296 07:19:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:46.296 07:19:08 -- common/autotest_common.sh@10 -- # set +x 00:09:46.296 07:19:08 -- nvmf/common.sh@469 -- # nvmfpid=72700 00:09:46.296 07:19:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:46.296 07:19:08 -- nvmf/common.sh@470 -- # waitforlisten 72700 00:09:46.296 07:19:08 -- common/autotest_common.sh@829 -- # '[' -z 72700 ']' 00:09:46.296 07:19:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.296 07:19:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:46.296 07:19:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.296 07:19:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:46.296 07:19:08 -- common/autotest_common.sh@10 -- # set +x 00:09:46.296 [2024-11-28 07:19:08.482517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:46.296 [2024-11-28 07:19:08.482635] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.554 [2024-11-28 07:19:08.627575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:46.554 [2024-11-28 07:19:08.724437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:46.554 [2024-11-28 07:19:08.724606] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.554 [2024-11-28 07:19:08.724622] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
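The nvmftestinit output above is easier to read as a topology: nvmf_veth_init builds one initiator-side veth pair and two target-side pairs, moves the target ends into the nvmf_tgt_ns_spdk namespace, and ties the peer ends together with the nvmf_br bridge. A condensed sketch of the ip commands traced above, using the same interface names and addresses (bring-up and iptables lines included so the pings at the end succeed):
# Condensed from the nvmf_veth_init trace above; run as root outside any harness.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                   # same sanity checks as above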
00:09:46.554 [2024-11-28 07:19:08.724633] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.554 [2024-11-28 07:19:08.724747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.554 [2024-11-28 07:19:08.725125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.554 [2024-11-28 07:19:08.725139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.490 07:19:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:47.490 07:19:09 -- common/autotest_common.sh@862 -- # return 0 00:09:47.490 07:19:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:47.490 07:19:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:47.490 07:19:09 -- common/autotest_common.sh@10 -- # set +x 00:09:47.490 07:19:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.490 07:19:09 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:47.490 [2024-11-28 07:19:09.671810] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.490 07:19:09 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.057 07:19:10 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:48.057 07:19:10 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:48.315 07:19:10 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:48.315 07:19:10 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:48.574 07:19:10 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:48.832 07:19:10 -- target/nvmf_lvol.sh@29 -- # lvs=7ce4cefc-a9cb-445a-a7d8-468622d48aa5 00:09:48.832 07:19:10 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7ce4cefc-a9cb-445a-a7d8-468622d48aa5 lvol 20 00:09:49.090 07:19:11 -- target/nvmf_lvol.sh@32 -- # lvol=251f280d-2d59-4dd7-a98d-419e59974b46 00:09:49.090 07:19:11 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:49.348 07:19:11 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 251f280d-2d59-4dd7-a98d-419e59974b46 00:09:49.607 07:19:11 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:49.865 [2024-11-28 07:19:11.970721] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.865 07:19:11 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:50.124 07:19:12 -- target/nvmf_lvol.sh@42 -- # perf_pid=72781 00:09:50.124 07:19:12 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:50.124 07:19:12 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:51.059 07:19:13 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 251f280d-2d59-4dd7-a98d-419e59974b46 MY_SNAPSHOT 
00:09:51.625 07:19:13 -- target/nvmf_lvol.sh@47 -- # snapshot=994ea3d7-e3d0-4008-8768-8ee1290757b5 00:09:51.625 07:19:13 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 251f280d-2d59-4dd7-a98d-419e59974b46 30 00:09:51.884 07:19:13 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 994ea3d7-e3d0-4008-8768-8ee1290757b5 MY_CLONE 00:09:52.142 07:19:14 -- target/nvmf_lvol.sh@49 -- # clone=446512d4-0115-447c-bd96-9896adafb639 00:09:52.142 07:19:14 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 446512d4-0115-447c-bd96-9896adafb639 00:09:52.710 07:19:14 -- target/nvmf_lvol.sh@53 -- # wait 72781 00:10:00.845 Initializing NVMe Controllers 00:10:00.845 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:00.845 Controller IO queue size 128, less than required. 00:10:00.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:00.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:00.845 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:00.845 Initialization complete. Launching workers. 00:10:00.845 ======================================================== 00:10:00.845 Latency(us) 00:10:00.845 Device Information : IOPS MiB/s Average min max 00:10:00.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8090.19 31.60 15835.74 1955.00 66479.15 00:10:00.845 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8396.69 32.80 15250.92 2897.04 107587.77 00:10:00.845 ======================================================== 00:10:00.845 Total : 16486.89 64.40 15537.89 1955.00 107587.77 00:10:00.845 00:10:00.845 07:19:22 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:00.845 07:19:22 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 251f280d-2d59-4dd7-a98d-419e59974b46 00:10:00.845 07:19:23 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7ce4cefc-a9cb-445a-a7d8-468622d48aa5 00:10:01.411 07:19:23 -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:01.411 07:19:23 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:01.411 07:19:23 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:01.411 07:19:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:01.411 07:19:23 -- nvmf/common.sh@116 -- # sync 00:10:01.411 07:19:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:01.411 07:19:23 -- nvmf/common.sh@119 -- # set +e 00:10:01.411 07:19:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:01.411 07:19:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:01.411 rmmod nvme_tcp 00:10:01.411 rmmod nvme_fabrics 00:10:01.411 rmmod nvme_keyring 00:10:01.411 07:19:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:01.411 07:19:23 -- nvmf/common.sh@123 -- # set -e 00:10:01.411 07:19:23 -- nvmf/common.sh@124 -- # return 0 00:10:01.411 07:19:23 -- nvmf/common.sh@477 -- # '[' -n 72700 ']' 00:10:01.411 07:19:23 -- nvmf/common.sh@478 -- # killprocess 72700 00:10:01.411 07:19:23 -- common/autotest_common.sh@936 -- # '[' -z 72700 ']' 00:10:01.411 07:19:23 -- common/autotest_common.sh@940 -- # kill -0 72700 00:10:01.411 07:19:23 -- common/autotest_common.sh@941 -- # uname 00:10:01.411 
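Stripped of the xtrace noise, the lvol portion of this test is a short rpc.py sequence against the running target. A sketch of the calls traced above, with angle-bracket placeholders standing in for the UUIDs this run printed (7ce4cefc-…, 251f280d-…, 994ea3d7-…, 446512d4-…); a fresh run prints new ones:
# Reconstructed from the xtrace above; rpc.py talks to the nvmf_tgt started earlier in this log.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py bdev_malloc_create 64 512                                    # Malloc0
$rpc_py bdev_malloc_create 64 512                                    # Malloc1
$rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
$rpc_py bdev_lvol_create_lvstore raid0 lvs                           # prints <lvs-uuid>
$rpc_py bdev_lvol_create -u <lvs-uuid> lvol 20                       # 20 MiB lvol, prints <lvol-uuid>
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf runs its randwrite workload against the exported namespace:
$rpc_py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT                   # prints <snap-uuid>
$rpc_py bdev_lvol_resize <lvol-uuid> 30                              # grow 20 -> 30
$rpc_py bdev_lvol_clone <snap-uuid> MY_CLONE                         # prints <clone-uuid>
$rpc_py bdev_lvol_inflate <clone-uuid>
# teardown, as in the surrounding trace:
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc_py bdev_lvol_delete <lvol-uuid>
$rpc_py bdev_lvol_delete_lvstore -u <lvs-uuid>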
07:19:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:01.411 07:19:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72700 00:10:01.411 killing process with pid 72700 00:10:01.411 07:19:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:01.411 07:19:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:01.411 07:19:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72700' 00:10:01.411 07:19:23 -- common/autotest_common.sh@955 -- # kill 72700 00:10:01.411 07:19:23 -- common/autotest_common.sh@960 -- # wait 72700 00:10:01.670 07:19:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:01.670 07:19:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:01.670 07:19:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:01.670 07:19:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.670 07:19:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:01.670 07:19:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.670 07:19:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:01.670 07:19:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.670 07:19:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:01.670 ************************************ 00:10:01.670 END TEST nvmf_lvol 00:10:01.670 ************************************ 00:10:01.670 00:10:01.670 real 0m15.973s 00:10:01.670 user 1m5.658s 00:10:01.670 sys 0m4.952s 00:10:01.670 07:19:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:01.670 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:10:01.670 07:19:23 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:01.670 07:19:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:01.670 07:19:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:01.670 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:10:01.670 ************************************ 00:10:01.670 START TEST nvmf_lvs_grow 00:10:01.670 ************************************ 00:10:01.670 07:19:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:01.929 * Looking for test storage... 
00:10:01.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:01.929 07:19:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:01.929 07:19:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:01.929 07:19:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:01.929 07:19:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:01.929 07:19:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:01.929 07:19:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:01.929 07:19:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:01.929 07:19:24 -- scripts/common.sh@335 -- # IFS=.-: 00:10:01.929 07:19:24 -- scripts/common.sh@335 -- # read -ra ver1 00:10:01.929 07:19:24 -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.929 07:19:24 -- scripts/common.sh@336 -- # read -ra ver2 00:10:01.929 07:19:24 -- scripts/common.sh@337 -- # local 'op=<' 00:10:01.929 07:19:24 -- scripts/common.sh@339 -- # ver1_l=2 00:10:01.929 07:19:24 -- scripts/common.sh@340 -- # ver2_l=1 00:10:01.929 07:19:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:01.929 07:19:24 -- scripts/common.sh@343 -- # case "$op" in 00:10:01.929 07:19:24 -- scripts/common.sh@344 -- # : 1 00:10:01.929 07:19:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:01.929 07:19:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.929 07:19:24 -- scripts/common.sh@364 -- # decimal 1 00:10:01.929 07:19:24 -- scripts/common.sh@352 -- # local d=1 00:10:01.929 07:19:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.929 07:19:24 -- scripts/common.sh@354 -- # echo 1 00:10:01.929 07:19:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:01.929 07:19:24 -- scripts/common.sh@365 -- # decimal 2 00:10:01.929 07:19:24 -- scripts/common.sh@352 -- # local d=2 00:10:01.929 07:19:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.929 07:19:24 -- scripts/common.sh@354 -- # echo 2 00:10:01.929 07:19:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:01.929 07:19:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:01.929 07:19:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:01.929 07:19:24 -- scripts/common.sh@367 -- # return 0 00:10:01.929 07:19:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.929 07:19:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:01.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.929 --rc genhtml_branch_coverage=1 00:10:01.929 --rc genhtml_function_coverage=1 00:10:01.929 --rc genhtml_legend=1 00:10:01.929 --rc geninfo_all_blocks=1 00:10:01.929 --rc geninfo_unexecuted_blocks=1 00:10:01.929 00:10:01.929 ' 00:10:01.929 07:19:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:01.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.929 --rc genhtml_branch_coverage=1 00:10:01.929 --rc genhtml_function_coverage=1 00:10:01.929 --rc genhtml_legend=1 00:10:01.929 --rc geninfo_all_blocks=1 00:10:01.929 --rc geninfo_unexecuted_blocks=1 00:10:01.929 00:10:01.929 ' 00:10:01.929 07:19:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:01.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.929 --rc genhtml_branch_coverage=1 00:10:01.929 --rc genhtml_function_coverage=1 00:10:01.929 --rc genhtml_legend=1 00:10:01.929 --rc geninfo_all_blocks=1 00:10:01.929 --rc geninfo_unexecuted_blocks=1 00:10:01.929 00:10:01.929 ' 00:10:01.929 
07:19:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:01.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.929 --rc genhtml_branch_coverage=1 00:10:01.929 --rc genhtml_function_coverage=1 00:10:01.929 --rc genhtml_legend=1 00:10:01.929 --rc geninfo_all_blocks=1 00:10:01.929 --rc geninfo_unexecuted_blocks=1 00:10:01.929 00:10:01.929 ' 00:10:01.929 07:19:24 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.929 07:19:24 -- nvmf/common.sh@7 -- # uname -s 00:10:01.929 07:19:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.929 07:19:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.929 07:19:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.929 07:19:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.929 07:19:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.929 07:19:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.929 07:19:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.929 07:19:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.929 07:19:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.929 07:19:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.929 07:19:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:10:01.929 07:19:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:10:01.929 07:19:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.929 07:19:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.929 07:19:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:01.929 07:19:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.929 07:19:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.929 07:19:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.929 07:19:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.930 07:19:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.930 07:19:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.930 07:19:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.930 07:19:24 -- paths/export.sh@5 -- # export PATH 00:10:01.930 07:19:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.930 07:19:24 -- nvmf/common.sh@46 -- # : 0 00:10:01.930 07:19:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:01.930 07:19:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:01.930 07:19:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:01.930 07:19:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.930 07:19:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.930 07:19:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:01.930 07:19:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:01.930 07:19:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:01.930 07:19:24 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.930 07:19:24 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:01.930 07:19:24 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:10:01.930 07:19:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:01.930 07:19:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.930 07:19:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:01.930 07:19:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:01.930 07:19:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:01.930 07:19:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.930 07:19:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:01.930 07:19:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.930 07:19:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:01.930 07:19:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:01.930 07:19:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:01.930 07:19:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:01.930 07:19:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:01.930 07:19:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:01.930 07:19:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.930 07:19:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.930 07:19:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:01.930 07:19:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:01.930 07:19:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:01.930 07:19:24 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:01.930 07:19:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:01.930 07:19:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.930 07:19:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:01.930 07:19:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:01.930 07:19:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:01.930 07:19:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:01.930 07:19:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:01.930 07:19:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:01.930 Cannot find device "nvmf_tgt_br" 00:10:01.930 07:19:24 -- nvmf/common.sh@154 -- # true 00:10:01.930 07:19:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.930 Cannot find device "nvmf_tgt_br2" 00:10:01.930 07:19:24 -- nvmf/common.sh@155 -- # true 00:10:01.930 07:19:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:01.930 07:19:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:01.930 Cannot find device "nvmf_tgt_br" 00:10:01.930 07:19:24 -- nvmf/common.sh@157 -- # true 00:10:01.930 07:19:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:01.930 Cannot find device "nvmf_tgt_br2" 00:10:01.930 07:19:24 -- nvmf/common.sh@158 -- # true 00:10:01.930 07:19:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:02.190 07:19:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:02.190 07:19:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:02.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:02.190 07:19:24 -- nvmf/common.sh@161 -- # true 00:10:02.190 07:19:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:02.190 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:02.190 07:19:24 -- nvmf/common.sh@162 -- # true 00:10:02.190 07:19:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:02.190 07:19:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:02.190 07:19:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:02.190 07:19:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:02.190 07:19:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:02.190 07:19:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:02.190 07:19:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:02.190 07:19:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:02.190 07:19:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:02.190 07:19:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:02.190 07:19:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:02.190 07:19:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:02.190 07:19:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:02.190 07:19:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:02.190 07:19:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:10:02.190 07:19:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:02.190 07:19:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:02.190 07:19:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:02.190 07:19:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:02.190 07:19:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:02.190 07:19:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:02.190 07:19:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:02.190 07:19:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:02.190 07:19:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:02.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:10:02.190 00:10:02.190 --- 10.0.0.2 ping statistics --- 00:10:02.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.190 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:10:02.190 07:19:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:02.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:02.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:10:02.190 00:10:02.190 --- 10.0.0.3 ping statistics --- 00:10:02.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.190 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:02.190 07:19:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:02.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:10:02.190 00:10:02.190 --- 10.0.0.1 ping statistics --- 00:10:02.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.190 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:02.190 07:19:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.190 07:19:24 -- nvmf/common.sh@421 -- # return 0 00:10:02.190 07:19:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:02.190 07:19:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.190 07:19:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:02.190 07:19:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:02.190 07:19:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.190 07:19:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:02.190 07:19:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:02.449 07:19:24 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:10:02.449 07:19:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:02.449 07:19:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.449 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:10:02.449 07:19:24 -- nvmf/common.sh@469 -- # nvmfpid=73112 00:10:02.449 07:19:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:02.449 07:19:24 -- nvmf/common.sh@470 -- # waitforlisten 73112 00:10:02.449 07:19:24 -- common/autotest_common.sh@829 -- # '[' -z 73112 ']' 00:10:02.449 07:19:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.449 07:19:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
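The block above is the stock nvmf_veth_init topology for these TCP tests: one initiator-side veth on the host, two target-side veths moved into the nvmf_tgt_ns_spdk namespace, all tied together by the nvmf_br bridge, with 10.0.0.1 (initiator) and 10.0.0.2/10.0.0.3 (target) verified by ping before the target is launched inside the namespace. A condensed sketch of the same setup, reduced to the first target interface (names, addresses and flags are the ones used above; ordering is simplified):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # the target then runs inside the namespace; its UNIX RPC socket stays visible on the host filesystem
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &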
00:10:02.449 07:19:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.449 07:19:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.449 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:10:02.449 [2024-11-28 07:19:24.534765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:02.449 [2024-11-28 07:19:24.535117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.449 [2024-11-28 07:19:24.679197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.708 [2024-11-28 07:19:24.776536] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:02.708 [2024-11-28 07:19:24.777018] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.708 [2024-11-28 07:19:24.777055] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.708 [2024-11-28 07:19:24.777072] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.708 [2024-11-28 07:19:24.777115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.279 07:19:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.279 07:19:25 -- common/autotest_common.sh@862 -- # return 0 00:10:03.279 07:19:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:03.279 07:19:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.279 07:19:25 -- common/autotest_common.sh@10 -- # set +x 00:10:03.279 07:19:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.279 07:19:25 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:03.537 [2024-11-28 07:19:25.806470] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:10:03.795 07:19:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:03.795 07:19:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.795 07:19:25 -- common/autotest_common.sh@10 -- # set +x 00:10:03.795 ************************************ 00:10:03.795 START TEST lvs_grow_clean 00:10:03.795 ************************************ 00:10:03.795 07:19:25 -- common/autotest_common.sh@1114 -- # lvs_grow 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:03.795 07:19:25 -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:04.054 07:19:26 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:04.054 07:19:26 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:04.314 07:19:26 -- target/nvmf_lvs_grow.sh@28 -- # lvs=981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:04.314 07:19:26 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:04.314 07:19:26 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:04.574 07:19:26 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:04.574 07:19:26 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:04.574 07:19:26 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 lvol 150 00:10:04.834 07:19:26 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b3c7b0d1-5f43-48ac-a124-831323666a57 00:10:04.834 07:19:26 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:04.834 07:19:26 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:05.092 [2024-11-28 07:19:27.199417] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:05.092 [2024-11-28 07:19:27.199607] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:05.092 true 00:10:05.092 07:19:27 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:05.092 07:19:27 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:05.351 07:19:27 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:05.351 07:19:27 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:05.610 07:19:27 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b3c7b0d1-5f43-48ac-a124-831323666a57 00:10:05.869 07:19:28 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:06.128 [2024-11-28 07:19:28.337116] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.128 07:19:28 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:06.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
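Everything lvs_grow_clean needs is now in place. Stripped of the shell plumbing, the setup above amounts to: back an lvol store with a 200M file-based AIO bdev, carve a 150M lvol out of it, and export that lvol over NVMe/TCP; growing the store later is just a bigger file, a rescan and a grow_lvstore call. A condensed sketch with the same paths and names (rpc and aio are shorthand introduced here, not variables from the script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # the grow path: enlarge the backing file, rescan the AIO bdev, then grow the store
  truncate -s 400M "$aio"
  $rpc bdev_aio_rescan aio_bdev
  $rpc bdev_lvol_grow_lvstore -u "$lvs"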
00:10:06.387 07:19:28 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73200 00:10:06.387 07:19:28 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:06.387 07:19:28 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:06.387 07:19:28 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73200 /var/tmp/bdevperf.sock 00:10:06.387 07:19:28 -- common/autotest_common.sh@829 -- # '[' -z 73200 ']' 00:10:06.387 07:19:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:06.387 07:19:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.387 07:19:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:06.387 07:19:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.387 07:19:28 -- common/autotest_common.sh@10 -- # set +x 00:10:06.387 [2024-11-28 07:19:28.656825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.387 [2024-11-28 07:19:28.656938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73200 ] 00:10:06.645 [2024-11-28 07:19:28.792638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.646 [2024-11-28 07:19:28.884676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.581 07:19:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.582 07:19:29 -- common/autotest_common.sh@862 -- # return 0 00:10:07.582 07:19:29 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:07.840 Nvme0n1 00:10:07.840 07:19:29 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:08.099 [ 00:10:08.099 { 00:10:08.099 "name": "Nvme0n1", 00:10:08.099 "aliases": [ 00:10:08.099 "b3c7b0d1-5f43-48ac-a124-831323666a57" 00:10:08.099 ], 00:10:08.099 "product_name": "NVMe disk", 00:10:08.099 "block_size": 4096, 00:10:08.099 "num_blocks": 38912, 00:10:08.099 "uuid": "b3c7b0d1-5f43-48ac-a124-831323666a57", 00:10:08.099 "assigned_rate_limits": { 00:10:08.099 "rw_ios_per_sec": 0, 00:10:08.099 "rw_mbytes_per_sec": 0, 00:10:08.099 "r_mbytes_per_sec": 0, 00:10:08.099 "w_mbytes_per_sec": 0 00:10:08.099 }, 00:10:08.099 "claimed": false, 00:10:08.099 "zoned": false, 00:10:08.099 "supported_io_types": { 00:10:08.099 "read": true, 00:10:08.099 "write": true, 00:10:08.099 "unmap": true, 00:10:08.099 "write_zeroes": true, 00:10:08.099 "flush": true, 00:10:08.099 "reset": true, 00:10:08.099 "compare": true, 00:10:08.099 "compare_and_write": true, 00:10:08.099 "abort": true, 00:10:08.099 "nvme_admin": true, 00:10:08.099 "nvme_io": true 00:10:08.099 }, 00:10:08.099 "driver_specific": { 00:10:08.099 "nvme": [ 00:10:08.099 { 00:10:08.099 "trid": { 00:10:08.099 "trtype": "TCP", 00:10:08.099 "adrfam": "IPv4", 00:10:08.099 "traddr": "10.0.0.2", 00:10:08.099 "trsvcid": "4420", 00:10:08.099 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:08.099 }, 00:10:08.099 "ctrlr_data": { 00:10:08.099 "cntlid": 1, 00:10:08.099 
"vendor_id": "0x8086", 00:10:08.099 "model_number": "SPDK bdev Controller", 00:10:08.099 "serial_number": "SPDK0", 00:10:08.099 "firmware_revision": "24.01.1", 00:10:08.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:08.099 "oacs": { 00:10:08.099 "security": 0, 00:10:08.099 "format": 0, 00:10:08.099 "firmware": 0, 00:10:08.099 "ns_manage": 0 00:10:08.099 }, 00:10:08.099 "multi_ctrlr": true, 00:10:08.099 "ana_reporting": false 00:10:08.099 }, 00:10:08.099 "vs": { 00:10:08.099 "nvme_version": "1.3" 00:10:08.099 }, 00:10:08.099 "ns_data": { 00:10:08.099 "id": 1, 00:10:08.099 "can_share": true 00:10:08.099 } 00:10:08.099 } 00:10:08.099 ], 00:10:08.099 "mp_policy": "active_passive" 00:10:08.099 } 00:10:08.099 } 00:10:08.099 ] 00:10:08.099 07:19:30 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73224 00:10:08.099 07:19:30 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:08.099 07:19:30 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:08.358 Running I/O for 10 seconds... 00:10:09.294 Latency(us) 00:10:09.294 [2024-11-28T07:19:31.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.294 [2024-11-28T07:19:31.569Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.294 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:10:09.294 [2024-11-28T07:19:31.569Z] =================================================================================================================== 00:10:09.294 [2024-11-28T07:19:31.569Z] Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:10:09.294 00:10:10.229 07:19:32 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:10.229 [2024-11-28T07:19:32.504Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.229 Nvme0n1 : 2.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:10:10.229 [2024-11-28T07:19:32.504Z] =================================================================================================================== 00:10:10.229 [2024-11-28T07:19:32.504Z] Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:10:10.229 00:10:10.487 true 00:10:10.487 07:19:32 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:10.487 07:19:32 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:10.745 07:19:32 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:10.745 07:19:32 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:10.745 07:19:32 -- target/nvmf_lvs_grow.sh@65 -- # wait 73224 00:10:11.312 [2024-11-28T07:19:33.587Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.312 Nvme0n1 : 3.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:10:11.312 [2024-11-28T07:19:33.587Z] =================================================================================================================== 00:10:11.312 [2024-11-28T07:19:33.587Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:10:11.312 00:10:12.245 [2024-11-28T07:19:34.520Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.245 Nvme0n1 : 4.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:10:12.245 [2024-11-28T07:19:34.520Z] =================================================================================================================== 00:10:12.245 
[2024-11-28T07:19:34.520Z] Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:10:12.245 00:10:13.178 [2024-11-28T07:19:35.453Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.178 Nvme0n1 : 5.00 7264.40 28.38 0.00 0.00 0.00 0.00 0.00 00:10:13.178 [2024-11-28T07:19:35.453Z] =================================================================================================================== 00:10:13.178 [2024-11-28T07:19:35.453Z] Total : 7264.40 28.38 0.00 0.00 0.00 0.00 0.00 00:10:13.178 00:10:14.553 [2024-11-28T07:19:36.828Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.553 Nvme0n1 : 6.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:10:14.553 [2024-11-28T07:19:36.828Z] =================================================================================================================== 00:10:14.553 [2024-11-28T07:19:36.828Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:10:14.553 00:10:15.486 [2024-11-28T07:19:37.761Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.486 Nvme0n1 : 7.00 7220.86 28.21 0.00 0.00 0.00 0.00 0.00 00:10:15.486 [2024-11-28T07:19:37.761Z] =================================================================================================================== 00:10:15.486 [2024-11-28T07:19:37.761Z] Total : 7220.86 28.21 0.00 0.00 0.00 0.00 0.00 00:10:15.486 00:10:16.422 [2024-11-28T07:19:38.697Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.422 Nvme0n1 : 8.00 7096.12 27.72 0.00 0.00 0.00 0.00 0.00 00:10:16.422 [2024-11-28T07:19:38.697Z] =================================================================================================================== 00:10:16.422 [2024-11-28T07:19:38.697Z] Total : 7096.12 27.72 0.00 0.00 0.00 0.00 0.00 00:10:16.422 00:10:17.357 [2024-11-28T07:19:39.632Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.357 Nvme0n1 : 9.00 7055.56 27.56 0.00 0.00 0.00 0.00 0.00 00:10:17.357 [2024-11-28T07:19:39.632Z] =================================================================================================================== 00:10:17.357 [2024-11-28T07:19:39.632Z] Total : 7055.56 27.56 0.00 0.00 0.00 0.00 0.00 00:10:17.357 00:10:18.292 [2024-11-28T07:19:40.567Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.292 Nvme0n1 : 10.00 7061.20 27.58 0.00 0.00 0.00 0.00 0.00 00:10:18.292 [2024-11-28T07:19:40.567Z] =================================================================================================================== 00:10:18.292 [2024-11-28T07:19:40.567Z] Total : 7061.20 27.58 0.00 0.00 0.00 0.00 0.00 00:10:18.292 00:10:18.292 00:10:18.292 Latency(us) 00:10:18.292 [2024-11-28T07:19:40.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.292 [2024-11-28T07:19:40.567Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.292 Nvme0n1 : 10.02 7059.74 27.58 0.00 0.00 18126.10 15609.48 118203.11 00:10:18.292 [2024-11-28T07:19:40.567Z] =================================================================================================================== 00:10:18.292 [2024-11-28T07:19:40.567Z] Total : 7059.74 27.58 0.00 0.00 18126.10 15609.48 118203.11 00:10:18.292 0 00:10:18.292 07:19:40 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73200 00:10:18.292 07:19:40 -- common/autotest_common.sh@936 -- # '[' -z 73200 ']' 00:10:18.292 07:19:40 -- common/autotest_common.sh@940 
-- # kill -0 73200 00:10:18.292 07:19:40 -- common/autotest_common.sh@941 -- # uname 00:10:18.292 07:19:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:18.292 07:19:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73200 00:10:18.292 killing process with pid 73200 00:10:18.292 Received shutdown signal, test time was about 10.000000 seconds 00:10:18.292 00:10:18.292 Latency(us) 00:10:18.292 [2024-11-28T07:19:40.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.292 [2024-11-28T07:19:40.567Z] =================================================================================================================== 00:10:18.292 [2024-11-28T07:19:40.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:18.292 07:19:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:18.292 07:19:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:18.292 07:19:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73200' 00:10:18.292 07:19:40 -- common/autotest_common.sh@955 -- # kill 73200 00:10:18.292 07:19:40 -- common/autotest_common.sh@960 -- # wait 73200 00:10:18.549 07:19:40 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:18.807 07:19:41 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:18.807 07:19:41 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:19.065 07:19:41 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:19.065 07:19:41 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:10:19.065 07:19:41 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:19.323 [2024-11-28 07:19:41.523984] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:19.323 07:19:41 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:19.323 07:19:41 -- common/autotest_common.sh@650 -- # local es=0 00:10:19.323 07:19:41 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:19.323 07:19:41 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.323 07:19:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.323 07:19:41 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.323 07:19:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.323 07:19:41 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.323 07:19:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.323 07:19:41 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.323 07:19:41 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:19.323 07:19:41 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:19.581 request: 00:10:19.581 { 00:10:19.581 "uuid": "981537b1-27e5-4c86-bc44-5cb2b7a16e45", 00:10:19.581 "method": "bdev_lvol_get_lvstores", 
00:10:19.581 "req_id": 1 00:10:19.581 } 00:10:19.581 Got JSON-RPC error response 00:10:19.581 response: 00:10:19.581 { 00:10:19.581 "code": -19, 00:10:19.581 "message": "No such device" 00:10:19.581 } 00:10:19.581 07:19:41 -- common/autotest_common.sh@653 -- # es=1 00:10:19.581 07:19:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.581 07:19:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:19.581 07:19:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.581 07:19:41 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:19.840 aio_bdev 00:10:19.840 07:19:42 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b3c7b0d1-5f43-48ac-a124-831323666a57 00:10:19.840 07:19:42 -- common/autotest_common.sh@897 -- # local bdev_name=b3c7b0d1-5f43-48ac-a124-831323666a57 00:10:19.840 07:19:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:19.840 07:19:42 -- common/autotest_common.sh@899 -- # local i 00:10:19.840 07:19:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:19.840 07:19:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:19.840 07:19:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:20.098 07:19:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b3c7b0d1-5f43-48ac-a124-831323666a57 -t 2000 00:10:20.356 [ 00:10:20.356 { 00:10:20.356 "name": "b3c7b0d1-5f43-48ac-a124-831323666a57", 00:10:20.356 "aliases": [ 00:10:20.356 "lvs/lvol" 00:10:20.357 ], 00:10:20.357 "product_name": "Logical Volume", 00:10:20.357 "block_size": 4096, 00:10:20.357 "num_blocks": 38912, 00:10:20.357 "uuid": "b3c7b0d1-5f43-48ac-a124-831323666a57", 00:10:20.357 "assigned_rate_limits": { 00:10:20.357 "rw_ios_per_sec": 0, 00:10:20.357 "rw_mbytes_per_sec": 0, 00:10:20.357 "r_mbytes_per_sec": 0, 00:10:20.357 "w_mbytes_per_sec": 0 00:10:20.357 }, 00:10:20.357 "claimed": false, 00:10:20.357 "zoned": false, 00:10:20.357 "supported_io_types": { 00:10:20.357 "read": true, 00:10:20.357 "write": true, 00:10:20.357 "unmap": true, 00:10:20.357 "write_zeroes": true, 00:10:20.357 "flush": false, 00:10:20.357 "reset": true, 00:10:20.357 "compare": false, 00:10:20.357 "compare_and_write": false, 00:10:20.357 "abort": false, 00:10:20.357 "nvme_admin": false, 00:10:20.357 "nvme_io": false 00:10:20.357 }, 00:10:20.357 "driver_specific": { 00:10:20.357 "lvol": { 00:10:20.357 "lvol_store_uuid": "981537b1-27e5-4c86-bc44-5cb2b7a16e45", 00:10:20.357 "base_bdev": "aio_bdev", 00:10:20.357 "thin_provision": false, 00:10:20.357 "snapshot": false, 00:10:20.357 "clone": false, 00:10:20.357 "esnap_clone": false 00:10:20.357 } 00:10:20.357 } 00:10:20.357 } 00:10:20.357 ] 00:10:20.357 07:19:42 -- common/autotest_common.sh@905 -- # return 0 00:10:20.357 07:19:42 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:20.357 07:19:42 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:20.614 07:19:42 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:20.614 07:19:42 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:20.614 07:19:42 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:21.181 07:19:43 -- target/nvmf_lvs_grow.sh@88 
-- # (( data_clusters == 99 )) 00:10:21.181 07:19:43 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b3c7b0d1-5f43-48ac-a124-831323666a57 00:10:21.438 07:19:43 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 981537b1-27e5-4c86-bc44-5cb2b7a16e45 00:10:21.697 07:19:43 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:21.956 07:19:44 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.215 ************************************ 00:10:22.215 END TEST lvs_grow_clean 00:10:22.215 ************************************ 00:10:22.215 00:10:22.215 real 0m18.569s 00:10:22.215 user 0m17.545s 00:10:22.215 sys 0m2.582s 00:10:22.215 07:19:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:22.215 07:19:44 -- common/autotest_common.sh@10 -- # set +x 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:22.215 07:19:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:22.215 07:19:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.215 07:19:44 -- common/autotest_common.sh@10 -- # set +x 00:10:22.215 ************************************ 00:10:22.215 START TEST lvs_grow_dirty 00:10:22.215 ************************************ 00:10:22.215 07:19:44 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:22.215 07:19:44 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:22.784 07:19:44 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:22.784 07:19:44 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:22.784 07:19:45 -- target/nvmf_lvs_grow.sh@28 -- # lvs=e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:22.784 07:19:45 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:22.784 07:19:45 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:23.042 07:19:45 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:23.042 07:19:45 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:23.042 07:19:45 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 lvol 150 00:10:23.608 07:19:45 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 00:10:23.608 07:19:45 -- target/nvmf_lvs_grow.sh@36 -- # 
truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:23.608 07:19:45 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:23.608 [2024-11-28 07:19:45.870292] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:23.608 [2024-11-28 07:19:45.870407] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:23.608 true 00:10:23.868 07:19:45 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:23.868 07:19:45 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:24.126 07:19:46 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:24.126 07:19:46 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:24.385 07:19:46 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 00:10:24.644 07:19:46 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:24.903 07:19:47 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:25.170 07:19:47 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73469 00:10:25.170 07:19:47 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:25.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:25.170 07:19:47 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:25.170 07:19:47 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73469 /var/tmp/bdevperf.sock 00:10:25.170 07:19:47 -- common/autotest_common.sh@829 -- # '[' -z 73469 ']' 00:10:25.170 07:19:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:25.170 07:19:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.170 07:19:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:25.170 07:19:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.170 07:19:47 -- common/autotest_common.sh@10 -- # set +x 00:10:25.170 [2024-11-28 07:19:47.352537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:25.170 [2024-11-28 07:19:47.352824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73469 ] 00:10:25.433 [2024-11-28 07:19:47.484532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.433 [2024-11-28 07:19:47.590576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.368 07:19:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.368 07:19:48 -- common/autotest_common.sh@862 -- # return 0 00:10:26.368 07:19:48 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:26.368 Nvme0n1 00:10:26.368 07:19:48 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:26.626 [ 00:10:26.626 { 00:10:26.626 "name": "Nvme0n1", 00:10:26.626 "aliases": [ 00:10:26.627 "0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5" 00:10:26.627 ], 00:10:26.627 "product_name": "NVMe disk", 00:10:26.627 "block_size": 4096, 00:10:26.627 "num_blocks": 38912, 00:10:26.627 "uuid": "0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5", 00:10:26.627 "assigned_rate_limits": { 00:10:26.627 "rw_ios_per_sec": 0, 00:10:26.627 "rw_mbytes_per_sec": 0, 00:10:26.627 "r_mbytes_per_sec": 0, 00:10:26.627 "w_mbytes_per_sec": 0 00:10:26.627 }, 00:10:26.627 "claimed": false, 00:10:26.627 "zoned": false, 00:10:26.627 "supported_io_types": { 00:10:26.627 "read": true, 00:10:26.627 "write": true, 00:10:26.627 "unmap": true, 00:10:26.627 "write_zeroes": true, 00:10:26.627 "flush": true, 00:10:26.627 "reset": true, 00:10:26.627 "compare": true, 00:10:26.627 "compare_and_write": true, 00:10:26.627 "abort": true, 00:10:26.627 "nvme_admin": true, 00:10:26.627 "nvme_io": true 00:10:26.627 }, 00:10:26.627 "driver_specific": { 00:10:26.627 "nvme": [ 00:10:26.627 { 00:10:26.627 "trid": { 00:10:26.627 "trtype": "TCP", 00:10:26.627 "adrfam": "IPv4", 00:10:26.627 "traddr": "10.0.0.2", 00:10:26.627 "trsvcid": "4420", 00:10:26.627 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:26.627 }, 00:10:26.627 "ctrlr_data": { 00:10:26.627 "cntlid": 1, 00:10:26.627 "vendor_id": "0x8086", 00:10:26.627 "model_number": "SPDK bdev Controller", 00:10:26.627 "serial_number": "SPDK0", 00:10:26.627 "firmware_revision": "24.01.1", 00:10:26.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:26.627 "oacs": { 00:10:26.627 "security": 0, 00:10:26.627 "format": 0, 00:10:26.627 "firmware": 0, 00:10:26.627 "ns_manage": 0 00:10:26.627 }, 00:10:26.627 "multi_ctrlr": true, 00:10:26.627 "ana_reporting": false 00:10:26.627 }, 00:10:26.627 "vs": { 00:10:26.627 "nvme_version": "1.3" 00:10:26.627 }, 00:10:26.627 "ns_data": { 00:10:26.627 "id": 1, 00:10:26.627 "can_share": true 00:10:26.627 } 00:10:26.627 } 00:10:26.627 ], 00:10:26.627 "mp_policy": "active_passive" 00:10:26.627 } 00:10:26.627 } 00:10:26.627 ] 00:10:26.627 07:19:48 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73493 00:10:26.627 07:19:48 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:26.627 07:19:48 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:26.885 Running I/O for 10 seconds... 
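As in the clean variant, bdevperf is started in wait mode (-z) against its own RPC socket, the exported namespace is attached as Nvme0 over TCP, and the 10-second randwrite run is kicked off through perform_tests; partway through, the lvol store is grown into the already-enlarged backing file underneath the running I/O. The driving commands, condensed from the lines above:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests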
00:10:27.820 Latency(us) 00:10:27.820 [2024-11-28T07:19:50.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.820 [2024-11-28T07:19:50.095Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:27.820 Nvme0n1 : 1.00 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:10:27.820 [2024-11-28T07:19:50.095Z] =================================================================================================================== 00:10:27.820 [2024-11-28T07:19:50.095Z] Total : 7620.00 29.77 0.00 0.00 0.00 0.00 0.00 00:10:27.820 00:10:28.755 07:19:50 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:28.755 [2024-11-28T07:19:51.030Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:28.755 Nvme0n1 : 2.00 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:10:28.755 [2024-11-28T07:19:51.030Z] =================================================================================================================== 00:10:28.755 [2024-11-28T07:19:51.030Z] Total : 7556.50 29.52 0.00 0.00 0.00 0.00 0.00 00:10:28.755 00:10:29.013 true 00:10:29.013 07:19:51 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:29.013 07:19:51 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:29.272 07:19:51 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:29.272 07:19:51 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:29.272 07:19:51 -- target/nvmf_lvs_grow.sh@65 -- # wait 73493 00:10:29.837 [2024-11-28T07:19:52.112Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:29.837 Nvme0n1 : 3.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:10:29.837 [2024-11-28T07:19:52.112Z] =================================================================================================================== 00:10:29.837 [2024-11-28T07:19:52.112Z] Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:10:29.837 00:10:30.772 [2024-11-28T07:19:53.047Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.772 Nvme0n1 : 4.00 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:10:30.772 [2024-11-28T07:19:53.047Z] =================================================================================================================== 00:10:30.772 [2024-11-28T07:19:53.047Z] Total : 7334.25 28.65 0.00 0.00 0.00 0.00 0.00 00:10:30.772 00:10:31.709 [2024-11-28T07:19:53.984Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.709 Nvme0n1 : 5.00 7264.40 28.38 0.00 0.00 0.00 0.00 0.00 00:10:31.709 [2024-11-28T07:19:53.984Z] =================================================================================================================== 00:10:31.709 [2024-11-28T07:19:53.984Z] Total : 7264.40 28.38 0.00 0.00 0.00 0.00 0.00 00:10:31.709 00:10:33.086 [2024-11-28T07:19:55.361Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.086 Nvme0n1 : 6.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:10:33.086 [2024-11-28T07:19:55.361Z] =================================================================================================================== 00:10:33.086 [2024-11-28T07:19:55.361Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:10:33.086 00:10:34.023 [2024-11-28T07:19:56.298Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:34.023 Nvme0n1 : 7.00 7016.14 27.41 0.00 0.00 0.00 0.00 0.00 00:10:34.023 [2024-11-28T07:19:56.298Z] =================================================================================================================== 00:10:34.023 [2024-11-28T07:19:56.298Z] Total : 7016.14 27.41 0.00 0.00 0.00 0.00 0.00 00:10:34.023 00:10:34.962 [2024-11-28T07:19:57.237Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.962 Nvme0n1 : 8.00 7012.25 27.39 0.00 0.00 0.00 0.00 0.00 00:10:34.962 [2024-11-28T07:19:57.237Z] =================================================================================================================== 00:10:34.962 [2024-11-28T07:19:57.237Z] Total : 7012.25 27.39 0.00 0.00 0.00 0.00 0.00 00:10:34.962 00:10:35.900 [2024-11-28T07:19:58.175Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.900 Nvme0n1 : 9.00 7023.33 27.43 0.00 0.00 0.00 0.00 0.00 00:10:35.900 [2024-11-28T07:19:58.175Z] =================================================================================================================== 00:10:35.900 [2024-11-28T07:19:58.175Z] Total : 7023.33 27.43 0.00 0.00 0.00 0.00 0.00 00:10:35.900 00:10:36.836 [2024-11-28T07:19:59.111Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.836 Nvme0n1 : 10.00 7019.50 27.42 0.00 0.00 0.00 0.00 0.00 00:10:36.836 [2024-11-28T07:19:59.111Z] =================================================================================================================== 00:10:36.836 [2024-11-28T07:19:59.111Z] Total : 7019.50 27.42 0.00 0.00 0.00 0.00 0.00 00:10:36.836 00:10:36.836 00:10:36.836 Latency(us) 00:10:36.836 [2024-11-28T07:19:59.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.836 [2024-11-28T07:19:59.111Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.836 Nvme0n1 : 10.00 7029.78 27.46 0.00 0.00 18202.97 12094.37 209715.20 00:10:36.836 [2024-11-28T07:19:59.111Z] =================================================================================================================== 00:10:36.836 [2024-11-28T07:19:59.111Z] Total : 7029.78 27.46 0.00 0.00 18202.97 12094.37 209715.20 00:10:36.836 0 00:10:36.836 07:19:58 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73469 00:10:36.836 07:19:58 -- common/autotest_common.sh@936 -- # '[' -z 73469 ']' 00:10:36.836 07:19:58 -- common/autotest_common.sh@940 -- # kill -0 73469 00:10:36.836 07:19:58 -- common/autotest_common.sh@941 -- # uname 00:10:36.836 07:19:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:36.837 07:19:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73469 00:10:36.837 killing process with pid 73469 00:10:36.837 Received shutdown signal, test time was about 10.000000 seconds 00:10:36.837 00:10:36.837 Latency(us) 00:10:36.837 [2024-11-28T07:19:59.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.837 [2024-11-28T07:19:59.112Z] =================================================================================================================== 00:10:36.837 [2024-11-28T07:19:59.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:36.837 07:19:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:36.837 07:19:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:36.837 07:19:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73469' 00:10:36.837 07:19:59 -- 
common/autotest_common.sh@955 -- # kill 73469 00:10:36.837 07:19:59 -- common/autotest_common.sh@960 -- # wait 73469 00:10:37.096 07:19:59 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:37.355 07:19:59 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:37.355 07:19:59 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:37.614 07:19:59 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:37.614 07:19:59 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:10:37.614 07:19:59 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73112 00:10:37.614 07:19:59 -- target/nvmf_lvs_grow.sh@74 -- # wait 73112 00:10:37.614 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73112 Killed "${NVMF_APP[@]}" "$@" 00:10:37.614 07:19:59 -- target/nvmf_lvs_grow.sh@74 -- # true 00:10:37.614 07:19:59 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:10:37.614 07:19:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:37.614 07:19:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.614 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:10:37.614 07:19:59 -- nvmf/common.sh@469 -- # nvmfpid=73630 00:10:37.614 07:19:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:37.614 07:19:59 -- nvmf/common.sh@470 -- # waitforlisten 73630 00:10:37.614 07:19:59 -- common/autotest_common.sh@829 -- # '[' -z 73630 ']' 00:10:37.614 07:19:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.614 07:19:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.614 07:19:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.614 07:19:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.614 07:19:59 -- common/autotest_common.sh@10 -- # set +x 00:10:37.614 [2024-11-28 07:19:59.861807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:37.614 [2024-11-28 07:19:59.862224] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.873 [2024-11-28 07:20:00.004437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.873 [2024-11-28 07:20:00.097850] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:37.873 [2024-11-28 07:20:00.098004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.873 [2024-11-28 07:20:00.098018] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.873 [2024-11-28 07:20:00.098026] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:37.873 [2024-11-28 07:20:00.098053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.808 07:20:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.808 07:20:00 -- common/autotest_common.sh@862 -- # return 0 00:10:38.808 07:20:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:38.808 07:20:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:38.808 07:20:00 -- common/autotest_common.sh@10 -- # set +x 00:10:38.808 07:20:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.808 07:20:00 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:39.066 [2024-11-28 07:20:01.201761] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:39.066 [2024-11-28 07:20:01.202090] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:39.066 [2024-11-28 07:20:01.202299] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:39.066 07:20:01 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:10:39.066 07:20:01 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 00:10:39.066 07:20:01 -- common/autotest_common.sh@897 -- # local bdev_name=0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 00:10:39.066 07:20:01 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:39.066 07:20:01 -- common/autotest_common.sh@899 -- # local i 00:10:39.066 07:20:01 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:39.066 07:20:01 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:39.066 07:20:01 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:39.323 07:20:01 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 -t 2000 00:10:39.580 [ 00:10:39.580 { 00:10:39.580 "name": "0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5", 00:10:39.580 "aliases": [ 00:10:39.580 "lvs/lvol" 00:10:39.580 ], 00:10:39.580 "product_name": "Logical Volume", 00:10:39.580 "block_size": 4096, 00:10:39.580 "num_blocks": 38912, 00:10:39.580 "uuid": "0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5", 00:10:39.580 "assigned_rate_limits": { 00:10:39.580 "rw_ios_per_sec": 0, 00:10:39.580 "rw_mbytes_per_sec": 0, 00:10:39.580 "r_mbytes_per_sec": 0, 00:10:39.580 "w_mbytes_per_sec": 0 00:10:39.580 }, 00:10:39.580 "claimed": false, 00:10:39.580 "zoned": false, 00:10:39.580 "supported_io_types": { 00:10:39.580 "read": true, 00:10:39.580 "write": true, 00:10:39.580 "unmap": true, 00:10:39.580 "write_zeroes": true, 00:10:39.580 "flush": false, 00:10:39.580 "reset": true, 00:10:39.580 "compare": false, 00:10:39.580 "compare_and_write": false, 00:10:39.580 "abort": false, 00:10:39.580 "nvme_admin": false, 00:10:39.580 "nvme_io": false 00:10:39.580 }, 00:10:39.580 "driver_specific": { 00:10:39.580 "lvol": { 00:10:39.580 "lvol_store_uuid": "e7133a72-cdff-4f40-b5a4-bf14dc4c5594", 00:10:39.580 "base_bdev": "aio_bdev", 00:10:39.580 "thin_provision": false, 00:10:39.580 "snapshot": false, 00:10:39.580 "clone": false, 00:10:39.580 "esnap_clone": false 00:10:39.580 } 00:10:39.580 } 00:10:39.580 } 00:10:39.580 ] 00:10:39.580 07:20:01 -- common/autotest_common.sh@905 -- # return 0 00:10:39.580 07:20:01 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:10:39.580 07:20:01 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:39.838 07:20:02 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:10:39.838 07:20:02 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:39.838 07:20:02 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:10:40.097 07:20:02 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:10:40.097 07:20:02 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:40.356 [2024-11-28 07:20:02.523208] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:40.356 07:20:02 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:40.356 07:20:02 -- common/autotest_common.sh@650 -- # local es=0 00:10:40.356 07:20:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:40.356 07:20:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.356 07:20:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:40.356 07:20:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.356 07:20:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:40.356 07:20:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.356 07:20:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:40.356 07:20:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.356 07:20:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:40.356 07:20:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:40.616 request: 00:10:40.616 { 00:10:40.616 "uuid": "e7133a72-cdff-4f40-b5a4-bf14dc4c5594", 00:10:40.616 "method": "bdev_lvol_get_lvstores", 00:10:40.616 "req_id": 1 00:10:40.616 } 00:10:40.616 Got JSON-RPC error response 00:10:40.616 response: 00:10:40.616 { 00:10:40.616 "code": -19, 00:10:40.616 "message": "No such device" 00:10:40.616 } 00:10:40.616 07:20:02 -- common/autotest_common.sh@653 -- # es=1 00:10:40.616 07:20:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:40.616 07:20:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:40.616 07:20:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:40.616 07:20:02 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:40.875 aio_bdev 00:10:40.875 07:20:03 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 00:10:40.875 07:20:03 -- common/autotest_common.sh@897 -- # local bdev_name=0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 00:10:40.875 07:20:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:40.875 07:20:03 -- common/autotest_common.sh@899 -- # local i 00:10:40.875 07:20:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:40.875 07:20:03 -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:10:40.875 07:20:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:41.198 07:20:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 -t 2000 00:10:41.485 [ 00:10:41.485 { 00:10:41.485 "name": "0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5", 00:10:41.485 "aliases": [ 00:10:41.485 "lvs/lvol" 00:10:41.485 ], 00:10:41.485 "product_name": "Logical Volume", 00:10:41.485 "block_size": 4096, 00:10:41.485 "num_blocks": 38912, 00:10:41.485 "uuid": "0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5", 00:10:41.485 "assigned_rate_limits": { 00:10:41.485 "rw_ios_per_sec": 0, 00:10:41.485 "rw_mbytes_per_sec": 0, 00:10:41.485 "r_mbytes_per_sec": 0, 00:10:41.485 "w_mbytes_per_sec": 0 00:10:41.485 }, 00:10:41.485 "claimed": false, 00:10:41.485 "zoned": false, 00:10:41.485 "supported_io_types": { 00:10:41.485 "read": true, 00:10:41.485 "write": true, 00:10:41.485 "unmap": true, 00:10:41.485 "write_zeroes": true, 00:10:41.485 "flush": false, 00:10:41.485 "reset": true, 00:10:41.485 "compare": false, 00:10:41.485 "compare_and_write": false, 00:10:41.485 "abort": false, 00:10:41.485 "nvme_admin": false, 00:10:41.485 "nvme_io": false 00:10:41.485 }, 00:10:41.485 "driver_specific": { 00:10:41.485 "lvol": { 00:10:41.485 "lvol_store_uuid": "e7133a72-cdff-4f40-b5a4-bf14dc4c5594", 00:10:41.485 "base_bdev": "aio_bdev", 00:10:41.485 "thin_provision": false, 00:10:41.485 "snapshot": false, 00:10:41.485 "clone": false, 00:10:41.485 "esnap_clone": false 00:10:41.485 } 00:10:41.485 } 00:10:41.485 } 00:10:41.485 ] 00:10:41.485 07:20:03 -- common/autotest_common.sh@905 -- # return 0 00:10:41.485 07:20:03 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:41.485 07:20:03 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:41.743 07:20:03 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:41.743 07:20:03 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:41.743 07:20:03 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:42.001 07:20:04 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:42.001 07:20:04 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5 00:10:42.260 07:20:04 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 00:10:42.518 07:20:04 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:42.778 07:20:04 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:43.036 ************************************ 00:10:43.036 END TEST lvs_grow_dirty 00:10:43.036 ************************************ 00:10:43.036 00:10:43.036 real 0m20.834s 00:10:43.036 user 0m43.739s 00:10:43.036 sys 0m7.878s 00:10:43.036 07:20:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:43.036 07:20:05 -- common/autotest_common.sh@10 -- # set +x 00:10:43.294 07:20:05 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:43.294 07:20:05 -- common/autotest_common.sh@806 -- # type=--id 00:10:43.294 07:20:05 -- common/autotest_common.sh@807 -- # id=0 00:10:43.294 
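The dirty-recovery check that completes above reduces to a handful of RPCs. A condensed, hypothetical replay is sketched here (UUIDs, backing-file path and expected cluster counts are copied from this run; rpc.py is assumed to be /home/vagrant/spdk_repo/spdk/scripts/rpc.py with the nvmf target already running):

rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096   # re-attaching the AIO file triggers blobstore recovery of the dirty lvstore
rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 | jq -r '.[0].free_clusters'         # expect 61
rpc.py bdev_lvol_get_lvstores -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594 | jq -r '.[0].total_data_clusters'   # expect 99
rpc.py bdev_lvol_delete 0dfc4763-a8a6-4e2b-b7c8-e2b2a06ab1b5                                   # remove the recovered lvol
rpc.py bdev_lvol_delete_lvstore -u e7133a72-cdff-4f40-b5a4-bf14dc4c5594                        # then the lvstore
rpc.py bdev_aio_delete aio_bdev                                                                # and the AIO backing bdev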
07:20:05 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:43.294 07:20:05 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:43.294 07:20:05 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:43.294 07:20:05 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:43.294 07:20:05 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:43.294 07:20:05 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:43.294 nvmf_trace.0 00:10:43.294 07:20:05 -- common/autotest_common.sh@821 -- # return 0 00:10:43.294 07:20:05 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:43.294 07:20:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:43.294 07:20:05 -- nvmf/common.sh@116 -- # sync 00:10:43.553 07:20:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:43.553 07:20:05 -- nvmf/common.sh@119 -- # set +e 00:10:43.553 07:20:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:43.553 07:20:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:43.553 rmmod nvme_tcp 00:10:43.812 rmmod nvme_fabrics 00:10:43.812 rmmod nvme_keyring 00:10:43.812 07:20:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:43.812 07:20:05 -- nvmf/common.sh@123 -- # set -e 00:10:43.812 07:20:05 -- nvmf/common.sh@124 -- # return 0 00:10:43.812 07:20:05 -- nvmf/common.sh@477 -- # '[' -n 73630 ']' 00:10:43.812 07:20:05 -- nvmf/common.sh@478 -- # killprocess 73630 00:10:43.812 07:20:05 -- common/autotest_common.sh@936 -- # '[' -z 73630 ']' 00:10:43.812 07:20:05 -- common/autotest_common.sh@940 -- # kill -0 73630 00:10:43.812 07:20:05 -- common/autotest_common.sh@941 -- # uname 00:10:43.812 07:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:43.812 07:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73630 00:10:43.812 killing process with pid 73630 00:10:43.812 07:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:43.812 07:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:43.812 07:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73630' 00:10:43.812 07:20:05 -- common/autotest_common.sh@955 -- # kill 73630 00:10:43.812 07:20:05 -- common/autotest_common.sh@960 -- # wait 73630 00:10:44.070 07:20:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:44.070 07:20:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:44.070 07:20:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:44.070 07:20:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.070 07:20:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:44.070 07:20:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.070 07:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.070 07:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.070 07:20:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:44.070 00:10:44.070 real 0m42.325s 00:10:44.070 user 1m8.310s 00:10:44.070 sys 0m11.513s 00:10:44.070 07:20:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:44.070 07:20:06 -- common/autotest_common.sh@10 -- # set +x 00:10:44.070 ************************************ 00:10:44.070 END TEST nvmf_lvs_grow 00:10:44.070 ************************************ 00:10:44.070 07:20:06 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:44.070 07:20:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:44.070 07:20:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.070 07:20:06 -- common/autotest_common.sh@10 -- # set +x 00:10:44.070 ************************************ 00:10:44.070 START TEST nvmf_bdev_io_wait 00:10:44.070 ************************************ 00:10:44.070 07:20:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:44.329 * Looking for test storage... 00:10:44.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.329 07:20:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:44.329 07:20:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:44.329 07:20:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:44.329 07:20:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:44.329 07:20:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:44.329 07:20:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:44.329 07:20:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:44.329 07:20:06 -- scripts/common.sh@335 -- # IFS=.-: 00:10:44.329 07:20:06 -- scripts/common.sh@335 -- # read -ra ver1 00:10:44.329 07:20:06 -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.329 07:20:06 -- scripts/common.sh@336 -- # read -ra ver2 00:10:44.329 07:20:06 -- scripts/common.sh@337 -- # local 'op=<' 00:10:44.329 07:20:06 -- scripts/common.sh@339 -- # ver1_l=2 00:10:44.329 07:20:06 -- scripts/common.sh@340 -- # ver2_l=1 00:10:44.329 07:20:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:44.329 07:20:06 -- scripts/common.sh@343 -- # case "$op" in 00:10:44.329 07:20:06 -- scripts/common.sh@344 -- # : 1 00:10:44.329 07:20:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:44.329 07:20:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:44.329 07:20:06 -- scripts/common.sh@364 -- # decimal 1 00:10:44.329 07:20:06 -- scripts/common.sh@352 -- # local d=1 00:10:44.329 07:20:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.329 07:20:06 -- scripts/common.sh@354 -- # echo 1 00:10:44.329 07:20:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:44.329 07:20:06 -- scripts/common.sh@365 -- # decimal 2 00:10:44.329 07:20:06 -- scripts/common.sh@352 -- # local d=2 00:10:44.329 07:20:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.329 07:20:06 -- scripts/common.sh@354 -- # echo 2 00:10:44.329 07:20:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:44.329 07:20:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:44.329 07:20:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:44.329 07:20:06 -- scripts/common.sh@367 -- # return 0 00:10:44.329 07:20:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.329 07:20:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.329 --rc genhtml_branch_coverage=1 00:10:44.329 --rc genhtml_function_coverage=1 00:10:44.329 --rc genhtml_legend=1 00:10:44.329 --rc geninfo_all_blocks=1 00:10:44.329 --rc geninfo_unexecuted_blocks=1 00:10:44.329 00:10:44.329 ' 00:10:44.329 07:20:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.329 --rc genhtml_branch_coverage=1 00:10:44.329 --rc genhtml_function_coverage=1 00:10:44.329 --rc genhtml_legend=1 00:10:44.329 --rc geninfo_all_blocks=1 00:10:44.329 --rc geninfo_unexecuted_blocks=1 00:10:44.329 00:10:44.329 ' 00:10:44.329 07:20:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.329 --rc genhtml_branch_coverage=1 00:10:44.329 --rc genhtml_function_coverage=1 00:10:44.329 --rc genhtml_legend=1 00:10:44.329 --rc geninfo_all_blocks=1 00:10:44.329 --rc geninfo_unexecuted_blocks=1 00:10:44.329 00:10:44.329 ' 00:10:44.329 07:20:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:44.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.329 --rc genhtml_branch_coverage=1 00:10:44.329 --rc genhtml_function_coverage=1 00:10:44.329 --rc genhtml_legend=1 00:10:44.329 --rc geninfo_all_blocks=1 00:10:44.329 --rc geninfo_unexecuted_blocks=1 00:10:44.329 00:10:44.329 ' 00:10:44.329 07:20:06 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.329 07:20:06 -- nvmf/common.sh@7 -- # uname -s 00:10:44.329 07:20:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.329 07:20:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.329 07:20:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.329 07:20:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.329 07:20:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.329 07:20:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.329 07:20:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.329 07:20:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.329 07:20:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.329 07:20:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.329 07:20:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 
00:10:44.329 07:20:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:10:44.329 07:20:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.329 07:20:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.329 07:20:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.329 07:20:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.330 07:20:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.330 07:20:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.330 07:20:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.330 07:20:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.330 07:20:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.330 07:20:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.330 07:20:06 -- paths/export.sh@5 -- # export PATH 00:10:44.330 07:20:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.330 07:20:06 -- nvmf/common.sh@46 -- # : 0 00:10:44.330 07:20:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:44.330 07:20:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:44.330 07:20:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:44.330 07:20:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.330 07:20:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.330 07:20:06 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:44.330 07:20:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:44.330 07:20:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:44.330 07:20:06 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.330 07:20:06 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.330 07:20:06 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:44.330 07:20:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:44.330 07:20:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.330 07:20:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:44.330 07:20:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:44.330 07:20:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:44.330 07:20:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.330 07:20:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.330 07:20:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.330 07:20:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:44.330 07:20:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:44.330 07:20:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:44.330 07:20:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:44.330 07:20:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:44.330 07:20:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:44.330 07:20:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.330 07:20:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.330 07:20:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:44.330 07:20:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:44.330 07:20:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.330 07:20:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.330 07:20:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.330 07:20:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.330 07:20:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.330 07:20:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.330 07:20:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.330 07:20:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.330 07:20:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:44.330 07:20:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:44.330 Cannot find device "nvmf_tgt_br" 00:10:44.330 07:20:06 -- nvmf/common.sh@154 -- # true 00:10:44.330 07:20:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.330 Cannot find device "nvmf_tgt_br2" 00:10:44.330 07:20:06 -- nvmf/common.sh@155 -- # true 00:10:44.330 07:20:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:44.330 07:20:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:44.330 Cannot find device "nvmf_tgt_br" 00:10:44.330 07:20:06 -- nvmf/common.sh@157 -- # true 00:10:44.330 07:20:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:44.330 Cannot find device "nvmf_tgt_br2" 00:10:44.330 07:20:06 -- nvmf/common.sh@158 -- # true 00:10:44.330 07:20:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:44.589 07:20:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:44.589 07:20:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.589 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.589 07:20:06 -- nvmf/common.sh@161 -- # true 00:10:44.589 07:20:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.589 07:20:06 -- nvmf/common.sh@162 -- # true 00:10:44.589 07:20:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:44.589 07:20:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:44.589 07:20:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:44.589 07:20:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:44.589 07:20:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:44.589 07:20:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.589 07:20:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.589 07:20:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:44.589 07:20:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:44.589 07:20:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:44.589 07:20:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:44.589 07:20:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:44.589 07:20:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:44.589 07:20:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.589 07:20:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.589 07:20:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.589 07:20:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:44.589 07:20:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:44.589 07:20:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.589 07:20:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.589 07:20:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.589 07:20:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.589 07:20:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.589 07:20:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:44.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:10:44.589 00:10:44.589 --- 10.0.0.2 ping statistics --- 00:10:44.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.589 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:44.589 07:20:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:44.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:44.589 00:10:44.589 --- 10.0.0.3 ping statistics --- 00:10:44.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.589 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:44.589 07:20:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:44.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:44.589 00:10:44.589 --- 10.0.0.1 ping statistics --- 00:10:44.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.589 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:44.589 07:20:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.589 07:20:06 -- nvmf/common.sh@421 -- # return 0 00:10:44.589 07:20:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:44.589 07:20:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.589 07:20:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:44.589 07:20:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:44.589 07:20:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.589 07:20:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:44.589 07:20:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:44.589 07:20:06 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:44.589 07:20:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:44.589 07:20:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:44.589 07:20:06 -- common/autotest_common.sh@10 -- # set +x 00:10:44.848 07:20:06 -- nvmf/common.sh@469 -- # nvmfpid=73952 00:10:44.848 07:20:06 -- nvmf/common.sh@470 -- # waitforlisten 73952 00:10:44.848 07:20:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:44.848 07:20:06 -- common/autotest_common.sh@829 -- # '[' -z 73952 ']' 00:10:44.848 07:20:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.848 07:20:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.848 07:20:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.848 07:20:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.848 07:20:06 -- common/autotest_common.sh@10 -- # set +x 00:10:44.848 [2024-11-28 07:20:06.918860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:44.848 [2024-11-28 07:20:06.918970] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.848 [2024-11-28 07:20:07.064212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.106 [2024-11-28 07:20:07.196972] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:45.106 [2024-11-28 07:20:07.197780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.106 [2024-11-28 07:20:07.198080] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.106 [2024-11-28 07:20:07.198400] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:45.106 [2024-11-28 07:20:07.198816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.106 [2024-11-28 07:20:07.198963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.106 [2024-11-28 07:20:07.199063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.106 [2024-11-28 07:20:07.199061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.041 07:20:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.041 07:20:07 -- common/autotest_common.sh@862 -- # return 0 00:10:46.041 07:20:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:46.041 07:20:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:46.041 07:20:07 -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 07:20:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.041 07:20:07 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:46.041 07:20:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.041 07:20:07 -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 07:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:46.041 07:20:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.041 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 07:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.041 07:20:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.041 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 [2024-11-28 07:20:08.098348] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.041 07:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.041 07:20:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.041 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 Malloc0 00:10:46.041 07:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:46.041 07:20:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.041 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 07:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:46.041 07:20:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.041 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 07:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.041 07:20:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.041 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:10:46.041 [2024-11-28 07:20:08.166588] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.041 07:20:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73988 00:10:46.041 07:20:08 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:46.041 07:20:08 -- nvmf/common.sh@520 -- # config=() 00:10:46.041 07:20:08 -- nvmf/common.sh@520 -- # local subsystem config 00:10:46.041 07:20:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@30 -- # READ_PID=73990 00:10:46.041 07:20:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:46.041 { 00:10:46.041 "params": { 00:10:46.041 "name": "Nvme$subsystem", 00:10:46.041 "trtype": "$TEST_TRANSPORT", 00:10:46.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.041 "adrfam": "ipv4", 00:10:46.041 "trsvcid": "$NVMF_PORT", 00:10:46.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.041 "hdgst": ${hdgst:-false}, 00:10:46.041 "ddgst": ${ddgst:-false} 00:10:46.041 }, 00:10:46.041 "method": "bdev_nvme_attach_controller" 00:10:46.041 } 00:10:46.041 EOF 00:10:46.041 )") 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73992 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:46.041 07:20:08 -- nvmf/common.sh@520 -- # config=() 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:46.041 07:20:08 -- nvmf/common.sh@520 -- # local subsystem config 00:10:46.041 07:20:08 -- nvmf/common.sh@542 -- # cat 00:10:46.041 07:20:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:46.041 07:20:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:46.041 { 00:10:46.041 "params": { 00:10:46.041 "name": "Nvme$subsystem", 00:10:46.041 "trtype": "$TEST_TRANSPORT", 00:10:46.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.041 "adrfam": "ipv4", 00:10:46.041 "trsvcid": "$NVMF_PORT", 00:10:46.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.041 "hdgst": ${hdgst:-false}, 00:10:46.041 "ddgst": ${ddgst:-false} 00:10:46.041 }, 00:10:46.041 "method": "bdev_nvme_attach_controller" 00:10:46.041 } 00:10:46.041 EOF 00:10:46.041 )") 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73995 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@35 -- # sync 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:46.041 07:20:08 -- nvmf/common.sh@542 -- # cat 00:10:46.041 07:20:08 -- nvmf/common.sh@520 -- # config=() 00:10:46.041 07:20:08 -- nvmf/common.sh@520 -- # local subsystem config 00:10:46.041 07:20:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:46.041 07:20:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:46.041 { 00:10:46.041 "params": { 00:10:46.041 "name": "Nvme$subsystem", 00:10:46.041 "trtype": "$TEST_TRANSPORT", 00:10:46.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.041 "adrfam": "ipv4", 00:10:46.041 "trsvcid": "$NVMF_PORT", 00:10:46.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:10:46.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.041 "hdgst": ${hdgst:-false}, 00:10:46.041 "ddgst": ${ddgst:-false} 00:10:46.041 }, 00:10:46.041 "method": "bdev_nvme_attach_controller" 00:10:46.041 } 00:10:46.041 EOF 00:10:46.041 )") 00:10:46.041 07:20:08 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:46.041 07:20:08 -- nvmf/common.sh@520 -- # config=() 00:10:46.041 07:20:08 -- nvmf/common.sh@544 -- # jq . 00:10:46.041 07:20:08 -- nvmf/common.sh@520 -- # local subsystem config 00:10:46.041 07:20:08 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:46.041 07:20:08 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:46.041 { 00:10:46.041 "params": { 00:10:46.041 "name": "Nvme$subsystem", 00:10:46.041 "trtype": "$TEST_TRANSPORT", 00:10:46.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:46.041 "adrfam": "ipv4", 00:10:46.041 "trsvcid": "$NVMF_PORT", 00:10:46.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:46.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:46.041 "hdgst": ${hdgst:-false}, 00:10:46.041 "ddgst": ${ddgst:-false} 00:10:46.041 }, 00:10:46.042 "method": "bdev_nvme_attach_controller" 00:10:46.042 } 00:10:46.042 EOF 00:10:46.042 )") 00:10:46.042 07:20:08 -- nvmf/common.sh@542 -- # cat 00:10:46.042 07:20:08 -- nvmf/common.sh@545 -- # IFS=, 00:10:46.042 07:20:08 -- nvmf/common.sh@544 -- # jq . 00:10:46.042 07:20:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:46.042 "params": { 00:10:46.042 "name": "Nvme1", 00:10:46.042 "trtype": "tcp", 00:10:46.042 "traddr": "10.0.0.2", 00:10:46.042 "adrfam": "ipv4", 00:10:46.042 "trsvcid": "4420", 00:10:46.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.042 "hdgst": false, 00:10:46.042 "ddgst": false 00:10:46.042 }, 00:10:46.042 "method": "bdev_nvme_attach_controller" 00:10:46.042 }' 00:10:46.042 07:20:08 -- nvmf/common.sh@542 -- # cat 00:10:46.042 07:20:08 -- nvmf/common.sh@545 -- # IFS=, 00:10:46.042 07:20:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:46.042 "params": { 00:10:46.042 "name": "Nvme1", 00:10:46.042 "trtype": "tcp", 00:10:46.042 "traddr": "10.0.0.2", 00:10:46.042 "adrfam": "ipv4", 00:10:46.042 "trsvcid": "4420", 00:10:46.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.042 "hdgst": false, 00:10:46.042 "ddgst": false 00:10:46.042 }, 00:10:46.042 "method": "bdev_nvme_attach_controller" 00:10:46.042 }' 00:10:46.042 07:20:08 -- nvmf/common.sh@544 -- # jq . 00:10:46.042 07:20:08 -- nvmf/common.sh@545 -- # IFS=, 00:10:46.042 07:20:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:46.042 "params": { 00:10:46.042 "name": "Nvme1", 00:10:46.042 "trtype": "tcp", 00:10:46.042 "traddr": "10.0.0.2", 00:10:46.042 "adrfam": "ipv4", 00:10:46.042 "trsvcid": "4420", 00:10:46.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.042 "hdgst": false, 00:10:46.042 "ddgst": false 00:10:46.042 }, 00:10:46.042 "method": "bdev_nvme_attach_controller" 00:10:46.042 }' 00:10:46.042 07:20:08 -- nvmf/common.sh@544 -- # jq . 
00:10:46.042 07:20:08 -- nvmf/common.sh@545 -- # IFS=, 00:10:46.042 07:20:08 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:46.042 "params": { 00:10:46.042 "name": "Nvme1", 00:10:46.042 "trtype": "tcp", 00:10:46.042 "traddr": "10.0.0.2", 00:10:46.042 "adrfam": "ipv4", 00:10:46.042 "trsvcid": "4420", 00:10:46.042 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.042 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.042 "hdgst": false, 00:10:46.042 "ddgst": false 00:10:46.042 }, 00:10:46.042 "method": "bdev_nvme_attach_controller" 00:10:46.042 }' 00:10:46.042 [2024-11-28 07:20:08.238249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:46.042 [2024-11-28 07:20:08.238614] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:46.042 [2024-11-28 07:20:08.240045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:46.042 [2024-11-28 07:20:08.240289] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:46.042 [2024-11-28 07:20:08.240698] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:46.042 [2024-11-28 07:20:08.241073] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:46.042 07:20:08 -- target/bdev_io_wait.sh@37 -- # wait 73988 00:10:46.042 [2024-11-28 07:20:08.251416] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:46.042 [2024-11-28 07:20:08.251745] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:46.301 [2024-11-28 07:20:08.453966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.301 [2024-11-28 07:20:08.527586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:46.301 [2024-11-28 07:20:08.557382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.560 [2024-11-28 07:20:08.635160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.560 [2024-11-28 07:20:08.653863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:46.560 [2024-11-28 07:20:08.709472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:10:46.560 Running I/O for 1 seconds... 00:10:46.560 [2024-11-28 07:20:08.736620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.560 [2024-11-28 07:20:08.812269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:46.560 Running I/O for 1 seconds... 00:10:46.819 Running I/O for 1 seconds... 00:10:46.819 Running I/O for 1 seconds... 
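Each of the four bdevperf jobs launched above receives its bdev configuration as JSON over /dev/fd/63 from gen_nvmf_target_json. A minimal standalone equivalent of the write job (core mask 0x10) is sketched below; the outer "subsystems"/"config" wrapper is an assumption, since only the bdev_nvme_attach_controller params are visible in the trace, while the bdevperf flags, target address and NQNs are taken from this run:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256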
00:10:47.756 00:10:47.756 Latency(us) 00:10:47.756 [2024-11-28T07:20:10.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.756 [2024-11-28T07:20:10.031Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:47.756 Nvme1n1 : 1.01 8326.32 32.52 0.00 0.00 15293.93 8698.41 28716.68 00:10:47.756 [2024-11-28T07:20:10.031Z] =================================================================================================================== 00:10:47.756 [2024-11-28T07:20:10.031Z] Total : 8326.32 32.52 0.00 0.00 15293.93 8698.41 28716.68 00:10:47.756 00:10:47.756 Latency(us) 00:10:47.756 [2024-11-28T07:20:10.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.756 [2024-11-28T07:20:10.031Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:47.756 Nvme1n1 : 1.01 5213.42 20.36 0.00 0.00 24377.65 12034.79 36223.53 00:10:47.756 [2024-11-28T07:20:10.031Z] =================================================================================================================== 00:10:47.756 [2024-11-28T07:20:10.031Z] Total : 5213.42 20.36 0.00 0.00 24377.65 12034.79 36223.53 00:10:47.756 00:10:47.756 Latency(us) 00:10:47.756 [2024-11-28T07:20:10.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.756 [2024-11-28T07:20:10.031Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:47.756 Nvme1n1 : 1.01 7020.68 27.42 0.00 0.00 18140.81 7983.48 25618.62 00:10:47.756 [2024-11-28T07:20:10.031Z] =================================================================================================================== 00:10:47.756 [2024-11-28T07:20:10.031Z] Total : 7020.68 27.42 0.00 0.00 18140.81 7983.48 25618.62 00:10:47.756 07:20:09 -- target/bdev_io_wait.sh@38 -- # wait 73990 00:10:47.756 00:10:47.756 Latency(us) 00:10:47.756 [2024-11-28T07:20:10.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.756 [2024-11-28T07:20:10.031Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:47.756 Nvme1n1 : 1.00 178673.78 697.94 0.00 0.00 713.70 342.57 837.82 00:10:47.756 [2024-11-28T07:20:10.031Z] =================================================================================================================== 00:10:47.756 [2024-11-28T07:20:10.031Z] Total : 178673.78 697.94 0.00 0.00 713.70 342.57 837.82 00:10:48.015 07:20:10 -- target/bdev_io_wait.sh@39 -- # wait 73992 00:10:48.015 07:20:10 -- target/bdev_io_wait.sh@40 -- # wait 73995 00:10:48.015 07:20:10 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.015 07:20:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.015 07:20:10 -- common/autotest_common.sh@10 -- # set +x 00:10:48.015 07:20:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.015 07:20:10 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:48.015 07:20:10 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:48.015 07:20:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:48.015 07:20:10 -- nvmf/common.sh@116 -- # sync 00:10:48.273 07:20:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:48.273 07:20:10 -- nvmf/common.sh@119 -- # set +e 00:10:48.273 07:20:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:48.273 07:20:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:48.273 rmmod nvme_tcp 00:10:48.273 rmmod nvme_fabrics 00:10:48.273 rmmod nvme_keyring 00:10:48.273 07:20:10 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:48.273 07:20:10 -- nvmf/common.sh@123 -- # set -e 00:10:48.273 07:20:10 -- nvmf/common.sh@124 -- # return 0 00:10:48.273 07:20:10 -- nvmf/common.sh@477 -- # '[' -n 73952 ']' 00:10:48.274 07:20:10 -- nvmf/common.sh@478 -- # killprocess 73952 00:10:48.274 07:20:10 -- common/autotest_common.sh@936 -- # '[' -z 73952 ']' 00:10:48.274 07:20:10 -- common/autotest_common.sh@940 -- # kill -0 73952 00:10:48.274 07:20:10 -- common/autotest_common.sh@941 -- # uname 00:10:48.274 07:20:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:48.274 07:20:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73952 00:10:48.274 killing process with pid 73952 00:10:48.274 07:20:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:48.274 07:20:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:48.274 07:20:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73952' 00:10:48.274 07:20:10 -- common/autotest_common.sh@955 -- # kill 73952 00:10:48.274 07:20:10 -- common/autotest_common.sh@960 -- # wait 73952 00:10:48.531 07:20:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:48.531 07:20:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:48.531 07:20:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:48.531 07:20:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:48.531 07:20:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:48.531 07:20:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.531 07:20:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.531 07:20:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.531 07:20:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:48.531 ************************************ 00:10:48.531 END TEST nvmf_bdev_io_wait 00:10:48.531 ************************************ 00:10:48.531 00:10:48.531 real 0m4.345s 00:10:48.531 user 0m18.629s 00:10:48.531 sys 0m2.204s 00:10:48.531 07:20:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:48.531 07:20:10 -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 07:20:10 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:48.531 07:20:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:48.531 07:20:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.531 07:20:10 -- common/autotest_common.sh@10 -- # set +x 00:10:48.531 ************************************ 00:10:48.531 START TEST nvmf_queue_depth 00:10:48.531 ************************************ 00:10:48.531 07:20:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:48.531 * Looking for test storage... 
00:10:48.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:48.531 07:20:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:48.532 07:20:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:48.532 07:20:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:48.791 07:20:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:48.791 07:20:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:48.791 07:20:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:48.791 07:20:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:48.791 07:20:10 -- scripts/common.sh@335 -- # IFS=.-: 00:10:48.791 07:20:10 -- scripts/common.sh@335 -- # read -ra ver1 00:10:48.791 07:20:10 -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.791 07:20:10 -- scripts/common.sh@336 -- # read -ra ver2 00:10:48.791 07:20:10 -- scripts/common.sh@337 -- # local 'op=<' 00:10:48.791 07:20:10 -- scripts/common.sh@339 -- # ver1_l=2 00:10:48.791 07:20:10 -- scripts/common.sh@340 -- # ver2_l=1 00:10:48.791 07:20:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:48.791 07:20:10 -- scripts/common.sh@343 -- # case "$op" in 00:10:48.791 07:20:10 -- scripts/common.sh@344 -- # : 1 00:10:48.791 07:20:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:48.791 07:20:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.791 07:20:10 -- scripts/common.sh@364 -- # decimal 1 00:10:48.791 07:20:10 -- scripts/common.sh@352 -- # local d=1 00:10:48.791 07:20:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.791 07:20:10 -- scripts/common.sh@354 -- # echo 1 00:10:48.791 07:20:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:48.791 07:20:10 -- scripts/common.sh@365 -- # decimal 2 00:10:48.791 07:20:10 -- scripts/common.sh@352 -- # local d=2 00:10:48.791 07:20:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.791 07:20:10 -- scripts/common.sh@354 -- # echo 2 00:10:48.791 07:20:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:48.791 07:20:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:48.791 07:20:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:48.791 07:20:10 -- scripts/common.sh@367 -- # return 0 00:10:48.791 07:20:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.791 07:20:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:48.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.791 --rc genhtml_branch_coverage=1 00:10:48.791 --rc genhtml_function_coverage=1 00:10:48.791 --rc genhtml_legend=1 00:10:48.791 --rc geninfo_all_blocks=1 00:10:48.791 --rc geninfo_unexecuted_blocks=1 00:10:48.791 00:10:48.791 ' 00:10:48.791 07:20:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:48.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.791 --rc genhtml_branch_coverage=1 00:10:48.791 --rc genhtml_function_coverage=1 00:10:48.791 --rc genhtml_legend=1 00:10:48.791 --rc geninfo_all_blocks=1 00:10:48.791 --rc geninfo_unexecuted_blocks=1 00:10:48.791 00:10:48.791 ' 00:10:48.791 07:20:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:48.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.791 --rc genhtml_branch_coverage=1 00:10:48.791 --rc genhtml_function_coverage=1 00:10:48.791 --rc genhtml_legend=1 00:10:48.791 --rc geninfo_all_blocks=1 00:10:48.791 --rc geninfo_unexecuted_blocks=1 00:10:48.791 00:10:48.791 ' 00:10:48.791 
07:20:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:48.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.791 --rc genhtml_branch_coverage=1 00:10:48.791 --rc genhtml_function_coverage=1 00:10:48.791 --rc genhtml_legend=1 00:10:48.791 --rc geninfo_all_blocks=1 00:10:48.791 --rc geninfo_unexecuted_blocks=1 00:10:48.791 00:10:48.791 ' 00:10:48.791 07:20:10 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:48.791 07:20:10 -- nvmf/common.sh@7 -- # uname -s 00:10:48.791 07:20:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.791 07:20:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.791 07:20:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.791 07:20:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.791 07:20:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.791 07:20:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.791 07:20:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.791 07:20:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.791 07:20:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.791 07:20:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.791 07:20:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:10:48.791 07:20:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:10:48.791 07:20:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.791 07:20:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.791 07:20:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:48.791 07:20:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:48.791 07:20:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.791 07:20:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.791 07:20:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.791 07:20:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.791 07:20:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.791 07:20:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.791 07:20:10 -- paths/export.sh@5 -- # export PATH 00:10:48.791 07:20:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.791 07:20:10 -- nvmf/common.sh@46 -- # : 0 00:10:48.791 07:20:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:48.791 07:20:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:48.791 07:20:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:48.791 07:20:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.791 07:20:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.791 07:20:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:48.791 07:20:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:48.791 07:20:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:48.791 07:20:10 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:48.791 07:20:10 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:48.791 07:20:10 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:48.791 07:20:10 -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:48.791 07:20:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:48.791 07:20:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.791 07:20:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:48.791 07:20:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:48.791 07:20:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:48.791 07:20:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.791 07:20:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.791 07:20:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.791 07:20:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:48.791 07:20:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:48.791 07:20:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:48.791 07:20:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:48.791 07:20:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:48.791 07:20:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:48.791 07:20:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.791 07:20:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.792 07:20:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:48.792 07:20:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:48.792 07:20:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:48.792 07:20:10 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:48.792 07:20:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:48.792 07:20:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.792 07:20:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:48.792 07:20:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:48.792 07:20:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:48.792 07:20:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:48.792 07:20:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:48.792 07:20:10 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:48.792 Cannot find device "nvmf_tgt_br" 00:10:48.792 07:20:10 -- nvmf/common.sh@154 -- # true 00:10:48.792 07:20:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:48.792 Cannot find device "nvmf_tgt_br2" 00:10:48.792 07:20:10 -- nvmf/common.sh@155 -- # true 00:10:48.792 07:20:10 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:48.792 07:20:10 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:48.792 Cannot find device "nvmf_tgt_br" 00:10:48.792 07:20:10 -- nvmf/common.sh@157 -- # true 00:10:48.792 07:20:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:48.792 Cannot find device "nvmf_tgt_br2" 00:10:48.792 07:20:10 -- nvmf/common.sh@158 -- # true 00:10:48.792 07:20:10 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:48.792 07:20:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:48.792 07:20:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:48.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.792 07:20:11 -- nvmf/common.sh@161 -- # true 00:10:48.792 07:20:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:48.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:48.792 07:20:11 -- nvmf/common.sh@162 -- # true 00:10:48.792 07:20:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:48.792 07:20:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:48.792 07:20:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:48.792 07:20:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:49.050 07:20:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:49.050 07:20:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:49.050 07:20:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:49.050 07:20:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:49.050 07:20:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:49.050 07:20:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:49.050 07:20:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:49.050 07:20:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:49.050 07:20:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:49.050 07:20:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:49.050 07:20:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:10:49.050 07:20:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:49.050 07:20:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:49.050 07:20:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:49.050 07:20:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:49.050 07:20:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:49.050 07:20:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:49.050 07:20:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:49.050 07:20:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:49.050 07:20:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:49.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:10:49.050 00:10:49.051 --- 10.0.0.2 ping statistics --- 00:10:49.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.051 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:49.051 07:20:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:49.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:49.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:10:49.051 00:10:49.051 --- 10.0.0.3 ping statistics --- 00:10:49.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.051 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:49.051 07:20:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:49.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:49.051 00:10:49.051 --- 10.0.0.1 ping statistics --- 00:10:49.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.051 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:49.051 07:20:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.051 07:20:11 -- nvmf/common.sh@421 -- # return 0 00:10:49.051 07:20:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:49.051 07:20:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.051 07:20:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:49.051 07:20:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:49.051 07:20:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.051 07:20:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:49.051 07:20:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:49.051 07:20:11 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:49.051 07:20:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:49.051 07:20:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:49.051 07:20:11 -- common/autotest_common.sh@10 -- # set +x 00:10:49.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
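nvmf_veth_init, traced above, builds the virtual topology the TCP tests run on: a network namespace for the target with two addressed veth endpoints (10.0.0.2 and 10.0.0.3), one initiator-side interface on the host (10.0.0.1), a bridge joining the peer ends, iptables rules admitting port 4420, and a single ping per address to prove each path. Condensed into plain commands, the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                     # host -> both target addresses
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target netns -> initiator

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the script first tries to delete any leftover interfaces and namespace from a previous run before recreating them.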
00:10:49.051 07:20:11 -- nvmf/common.sh@469 -- # nvmfpid=74240 00:10:49.051 07:20:11 -- nvmf/common.sh@470 -- # waitforlisten 74240 00:10:49.051 07:20:11 -- common/autotest_common.sh@829 -- # '[' -z 74240 ']' 00:10:49.051 07:20:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.051 07:20:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:49.051 07:20:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:49.051 07:20:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.051 07:20:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:49.051 07:20:11 -- common/autotest_common.sh@10 -- # set +x 00:10:49.051 [2024-11-28 07:20:11.291326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:49.051 [2024-11-28 07:20:11.291434] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.310 [2024-11-28 07:20:11.427968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.310 [2024-11-28 07:20:11.522528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:49.310 [2024-11-28 07:20:11.522696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.310 [2024-11-28 07:20:11.522710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.310 [2024-11-28 07:20:11.522734] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
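nvmfappstart then launches the SPDK target inside that namespace (-m 0x2 pins it to a single core for this test) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough equivalent of those two steps; the rpc_get_methods poll here stands in for the waitforlisten helper:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the RPC socket until the target is ready to accept configuration calls
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done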
00:10:49.310 [2024-11-28 07:20:11.522767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.246 07:20:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:50.246 07:20:12 -- common/autotest_common.sh@862 -- # return 0 00:10:50.246 07:20:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:50.246 07:20:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:50.246 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.246 07:20:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.246 07:20:12 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.246 07:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.246 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.246 [2024-11-28 07:20:12.322683] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.246 07:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.246 07:20:12 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:50.246 07:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.246 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.246 Malloc0 00:10:50.246 07:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.246 07:20:12 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.246 07:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.246 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.246 07:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.246 07:20:12 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.246 07:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.246 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.246 07:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.246 07:20:12 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.246 07:20:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.246 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.246 [2024-11-28 07:20:12.384061] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.246 07:20:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.246 07:20:12 -- target/queue_depth.sh@30 -- # bdevperf_pid=74273 00:10:50.246 07:20:12 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:50.246 07:20:12 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.246 07:20:12 -- target/queue_depth.sh@33 -- # waitforlisten 74273 /var/tmp/bdevperf.sock 00:10:50.246 07:20:12 -- common/autotest_common.sh@829 -- # '[' -z 74273 ']' 00:10:50.246 07:20:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:50.246 07:20:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:50.246 07:20:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
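With the target listening, the queue_depth script provisions everything over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, a subsystem, a namespace, and a listener on the first target address. rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the sequence is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as traced
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420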
00:10:50.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:50.246 07:20:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:50.246 07:20:12 -- common/autotest_common.sh@10 -- # set +x 00:10:50.246 [2024-11-28 07:20:12.437956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:50.246 [2024-11-28 07:20:12.438640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74273 ] 00:10:50.506 [2024-11-28 07:20:12.582705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.506 [2024-11-28 07:20:12.675225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.452 07:20:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.452 07:20:13 -- common/autotest_common.sh@862 -- # return 0 00:10:51.452 07:20:13 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:51.452 07:20:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.452 07:20:13 -- common/autotest_common.sh@10 -- # set +x 00:10:51.452 NVMe0n1 00:10:51.453 07:20:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.453 07:20:13 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:51.453 Running I/O for 10 seconds... 00:11:01.454 00:11:01.454 Latency(us) 00:11:01.454 [2024-11-28T07:20:23.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.454 [2024-11-28T07:20:23.729Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:01.454 Verification LBA range: start 0x0 length 0x4000 00:11:01.454 NVMe0n1 : 10.06 15069.53 58.87 0.00 0.00 67689.50 14954.12 57671.68 00:11:01.454 [2024-11-28T07:20:23.729Z] =================================================================================================================== 00:11:01.454 [2024-11-28T07:20:23.729Z] Total : 15069.53 58.87 0.00 0.00 67689.50 14954.12 57671.68 00:11:01.454 0 00:11:01.714 07:20:23 -- target/queue_depth.sh@39 -- # killprocess 74273 00:11:01.714 07:20:23 -- common/autotest_common.sh@936 -- # '[' -z 74273 ']' 00:11:01.714 07:20:23 -- common/autotest_common.sh@940 -- # kill -0 74273 00:11:01.714 07:20:23 -- common/autotest_common.sh@941 -- # uname 00:11:01.714 07:20:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:01.714 07:20:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74273 00:11:01.714 killing process with pid 74273 00:11:01.714 Received shutdown signal, test time was about 10.000000 seconds 00:11:01.714 00:11:01.714 Latency(us) 00:11:01.714 [2024-11-28T07:20:23.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:01.714 [2024-11-28T07:20:23.989Z] =================================================================================================================== 00:11:01.714 [2024-11-28T07:20:23.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:01.714 07:20:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:01.714 07:20:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:01.714 07:20:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74273' 
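The measurement itself is driven by the bdevperf example app rather than the kernel initiator: it starts idle (-z), waits on its own RPC socket, attaches the exported subsystem as an NVMe bdev, and perform_tests then runs the 10-second verify workload at queue depth 1024 with 4 KiB I/O that produces the IOPS/latency table above. A sketch of that flow with the arguments from the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!
  # (the harness waits for /var/tmp/bdevperf.sock to answer RPCs before continuing)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
  kill $bdevperf_pid             # 74273 in this run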
00:11:01.714 07:20:23 -- common/autotest_common.sh@955 -- # kill 74273 00:11:01.714 07:20:23 -- common/autotest_common.sh@960 -- # wait 74273 00:11:01.974 07:20:23 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:01.974 07:20:23 -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:01.974 07:20:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:01.974 07:20:23 -- nvmf/common.sh@116 -- # sync 00:11:01.974 07:20:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:01.974 07:20:24 -- nvmf/common.sh@119 -- # set +e 00:11:01.974 07:20:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:01.974 07:20:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:01.974 rmmod nvme_tcp 00:11:01.974 rmmod nvme_fabrics 00:11:01.974 rmmod nvme_keyring 00:11:01.974 07:20:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:01.974 07:20:24 -- nvmf/common.sh@123 -- # set -e 00:11:01.974 07:20:24 -- nvmf/common.sh@124 -- # return 0 00:11:01.974 07:20:24 -- nvmf/common.sh@477 -- # '[' -n 74240 ']' 00:11:01.974 07:20:24 -- nvmf/common.sh@478 -- # killprocess 74240 00:11:01.974 07:20:24 -- common/autotest_common.sh@936 -- # '[' -z 74240 ']' 00:11:01.974 07:20:24 -- common/autotest_common.sh@940 -- # kill -0 74240 00:11:01.974 07:20:24 -- common/autotest_common.sh@941 -- # uname 00:11:01.974 07:20:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:01.974 07:20:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74240 00:11:01.974 killing process with pid 74240 00:11:01.974 07:20:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:01.974 07:20:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:01.974 07:20:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74240' 00:11:01.974 07:20:24 -- common/autotest_common.sh@955 -- # kill 74240 00:11:01.974 07:20:24 -- common/autotest_common.sh@960 -- # wait 74240 00:11:02.234 07:20:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:02.234 07:20:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:02.234 07:20:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:02.234 07:20:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.234 07:20:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:02.234 07:20:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.234 07:20:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.234 07:20:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.234 07:20:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:02.234 ************************************ 00:11:02.234 END TEST nvmf_queue_depth 00:11:02.234 ************************************ 00:11:02.234 00:11:02.234 real 0m13.709s 00:11:02.234 user 0m23.839s 00:11:02.234 sys 0m2.059s 00:11:02.234 07:20:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:02.234 07:20:24 -- common/autotest_common.sh@10 -- # set +x 00:11:02.234 07:20:24 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:02.234 07:20:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:02.234 07:20:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.234 07:20:24 -- common/autotest_common.sh@10 -- # set +x 00:11:02.234 ************************************ 00:11:02.234 START TEST nvmf_multipath 00:11:02.234 ************************************ 00:11:02.234 07:20:24 -- 
common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:02.234 * Looking for test storage... 00:11:02.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.494 07:20:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:02.494 07:20:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:02.494 07:20:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:02.494 07:20:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:02.494 07:20:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:02.494 07:20:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:02.494 07:20:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:02.494 07:20:24 -- scripts/common.sh@335 -- # IFS=.-: 00:11:02.494 07:20:24 -- scripts/common.sh@335 -- # read -ra ver1 00:11:02.494 07:20:24 -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.494 07:20:24 -- scripts/common.sh@336 -- # read -ra ver2 00:11:02.494 07:20:24 -- scripts/common.sh@337 -- # local 'op=<' 00:11:02.494 07:20:24 -- scripts/common.sh@339 -- # ver1_l=2 00:11:02.494 07:20:24 -- scripts/common.sh@340 -- # ver2_l=1 00:11:02.494 07:20:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:02.494 07:20:24 -- scripts/common.sh@343 -- # case "$op" in 00:11:02.494 07:20:24 -- scripts/common.sh@344 -- # : 1 00:11:02.494 07:20:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:02.494 07:20:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.494 07:20:24 -- scripts/common.sh@364 -- # decimal 1 00:11:02.494 07:20:24 -- scripts/common.sh@352 -- # local d=1 00:11:02.494 07:20:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.494 07:20:24 -- scripts/common.sh@354 -- # echo 1 00:11:02.494 07:20:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:02.494 07:20:24 -- scripts/common.sh@365 -- # decimal 2 00:11:02.494 07:20:24 -- scripts/common.sh@352 -- # local d=2 00:11:02.494 07:20:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.494 07:20:24 -- scripts/common.sh@354 -- # echo 2 00:11:02.494 07:20:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:02.494 07:20:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:02.494 07:20:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:02.494 07:20:24 -- scripts/common.sh@367 -- # return 0 00:11:02.494 07:20:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.494 07:20:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:02.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.494 --rc genhtml_branch_coverage=1 00:11:02.494 --rc genhtml_function_coverage=1 00:11:02.494 --rc genhtml_legend=1 00:11:02.494 --rc geninfo_all_blocks=1 00:11:02.494 --rc geninfo_unexecuted_blocks=1 00:11:02.494 00:11:02.494 ' 00:11:02.494 07:20:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:02.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.494 --rc genhtml_branch_coverage=1 00:11:02.494 --rc genhtml_function_coverage=1 00:11:02.494 --rc genhtml_legend=1 00:11:02.494 --rc geninfo_all_blocks=1 00:11:02.494 --rc geninfo_unexecuted_blocks=1 00:11:02.494 00:11:02.494 ' 00:11:02.494 07:20:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:02.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.494 --rc genhtml_branch_coverage=1 00:11:02.494 --rc genhtml_function_coverage=1 00:11:02.494 
--rc genhtml_legend=1 00:11:02.494 --rc geninfo_all_blocks=1 00:11:02.494 --rc geninfo_unexecuted_blocks=1 00:11:02.494 00:11:02.494 ' 00:11:02.494 07:20:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:02.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.494 --rc genhtml_branch_coverage=1 00:11:02.494 --rc genhtml_function_coverage=1 00:11:02.494 --rc genhtml_legend=1 00:11:02.494 --rc geninfo_all_blocks=1 00:11:02.494 --rc geninfo_unexecuted_blocks=1 00:11:02.494 00:11:02.494 ' 00:11:02.494 07:20:24 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.494 07:20:24 -- nvmf/common.sh@7 -- # uname -s 00:11:02.494 07:20:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.494 07:20:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.494 07:20:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.494 07:20:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.494 07:20:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.494 07:20:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.494 07:20:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.494 07:20:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.494 07:20:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.494 07:20:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.494 07:20:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:11:02.494 07:20:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:11:02.494 07:20:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.494 07:20:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.494 07:20:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.494 07:20:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.494 07:20:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.494 07:20:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.494 07:20:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.494 07:20:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.494 07:20:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.494 07:20:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.494 07:20:24 -- paths/export.sh@5 -- # export PATH 00:11:02.494 07:20:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.494 07:20:24 -- nvmf/common.sh@46 -- # : 0 00:11:02.495 07:20:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:02.495 07:20:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:02.495 07:20:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:02.495 07:20:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.495 07:20:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.495 07:20:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:02.495 07:20:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:02.495 07:20:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:02.495 07:20:24 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:02.495 07:20:24 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:02.495 07:20:24 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:02.495 07:20:24 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:02.495 07:20:24 -- target/multipath.sh@43 -- # nvmftestinit 00:11:02.495 07:20:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:02.495 07:20:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.495 07:20:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:02.495 07:20:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:02.495 07:20:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:02.495 07:20:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.495 07:20:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.495 07:20:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.495 07:20:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:02.495 07:20:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:02.495 07:20:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:02.495 07:20:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:02.495 07:20:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:02.495 07:20:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:02.495 07:20:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.495 07:20:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.495 07:20:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:02.495 07:20:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:02.495 07:20:24 -- 
nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.495 07:20:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.495 07:20:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.495 07:20:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.495 07:20:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.495 07:20:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.495 07:20:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.495 07:20:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.495 07:20:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:02.495 07:20:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:02.495 Cannot find device "nvmf_tgt_br" 00:11:02.495 07:20:24 -- nvmf/common.sh@154 -- # true 00:11:02.495 07:20:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.495 Cannot find device "nvmf_tgt_br2" 00:11:02.495 07:20:24 -- nvmf/common.sh@155 -- # true 00:11:02.495 07:20:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:02.495 07:20:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:02.495 Cannot find device "nvmf_tgt_br" 00:11:02.495 07:20:24 -- nvmf/common.sh@157 -- # true 00:11:02.495 07:20:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:02.495 Cannot find device "nvmf_tgt_br2" 00:11:02.495 07:20:24 -- nvmf/common.sh@158 -- # true 00:11:02.495 07:20:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:02.495 07:20:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:02.754 07:20:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.754 07:20:24 -- nvmf/common.sh@161 -- # true 00:11:02.754 07:20:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.754 07:20:24 -- nvmf/common.sh@162 -- # true 00:11:02.754 07:20:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.754 07:20:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.754 07:20:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.754 07:20:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.754 07:20:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.754 07:20:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.754 07:20:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.754 07:20:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:02.754 07:20:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:02.754 07:20:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:02.754 07:20:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:02.754 07:20:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:02.754 07:20:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:02.754 07:20:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:11:02.754 07:20:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:02.754 07:20:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:02.754 07:20:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:02.754 07:20:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:02.754 07:20:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:02.754 07:20:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:02.754 07:20:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:02.754 07:20:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:02.754 07:20:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:02.755 07:20:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:02.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:11:02.755 00:11:02.755 --- 10.0.0.2 ping statistics --- 00:11:02.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.755 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:02.755 07:20:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:02.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:02.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:11:02.755 00:11:02.755 --- 10.0.0.3 ping statistics --- 00:11:02.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.755 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:02.755 07:20:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:02.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:11:02.755 00:11:02.755 --- 10.0.0.1 ping statistics --- 00:11:02.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.755 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:11:02.755 07:20:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.755 07:20:24 -- nvmf/common.sh@421 -- # return 0 00:11:02.755 07:20:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:02.755 07:20:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.755 07:20:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:02.755 07:20:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:02.755 07:20:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.755 07:20:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:02.755 07:20:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:02.755 07:20:24 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:11:02.755 07:20:24 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:02.755 07:20:24 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:02.755 07:20:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:02.755 07:20:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:02.755 07:20:24 -- common/autotest_common.sh@10 -- # set +x 00:11:02.755 07:20:24 -- nvmf/common.sh@469 -- # nvmfpid=74604 00:11:02.755 07:20:24 -- nvmf/common.sh@470 -- # waitforlisten 74604 00:11:02.755 07:20:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.755 07:20:24 -- common/autotest_common.sh@829 -- # '[' -z 74604 ']' 00:11:02.755 07:20:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.755 07:20:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.755 07:20:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.755 07:20:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.755 07:20:24 -- common/autotest_common.sh@10 -- # set +x 00:11:02.755 [2024-11-28 07:20:25.020272] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:02.755 [2024-11-28 07:20:25.020874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.012 [2024-11-28 07:20:25.166159] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.012 [2024-11-28 07:20:25.255353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:03.012 [2024-11-28 07:20:25.255568] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.012 [2024-11-28 07:20:25.255585] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.012 [2024-11-28 07:20:25.255595] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
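For nvmf_multipath the target is started the same way but across four cores (-m 0xF). The part that distinguishes it is what the rest of this trace does with that target: the subsystem is created with ANA reporting enabled, exposed on both portal addresses, and the kernel host connects once per portal, so the same namespace shows up through two paths (nvme0c0n1 and nvme0c1n1) whose ana_state sysfs files can be flipped between optimized, non-optimized and inaccessible while fio keeps running. Condensed from the commands that follow, and reusing the generated host NQN/ID from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r: ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
      -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
  cat /sys/block/nvme0c0n1/ana_state                # "optimized" until the listener state is changed
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible    # fail path 1; I/O should continue on path 2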
00:11:03.012 [2024-11-28 07:20:25.255836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.012 [2024-11-28 07:20:25.255990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.012 [2024-11-28 07:20:25.256110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.012 [2024-11-28 07:20:25.256116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.948 07:20:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.948 07:20:26 -- common/autotest_common.sh@862 -- # return 0 00:11:03.948 07:20:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:03.948 07:20:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:03.948 07:20:26 -- common/autotest_common.sh@10 -- # set +x 00:11:03.948 07:20:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.948 07:20:26 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.208 [2024-11-28 07:20:26.282329] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.208 07:20:26 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:04.466 Malloc0 00:11:04.466 07:20:26 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:04.725 07:20:26 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.983 07:20:27 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.242 [2024-11-28 07:20:27.282166] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.242 07:20:27 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:05.500 [2024-11-28 07:20:27.530702] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:05.500 07:20:27 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:11:05.500 07:20:27 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:05.759 07:20:27 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.759 07:20:27 -- common/autotest_common.sh@1187 -- # local i=0 00:11:05.759 07:20:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.759 07:20:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:05.759 07:20:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:07.660 07:20:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:07.660 07:20:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:07.660 07:20:29 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.660 07:20:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:07.660 07:20:29 -- 
common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.660 07:20:29 -- common/autotest_common.sh@1197 -- # return 0 00:11:07.660 07:20:29 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:07.660 07:20:29 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:07.660 07:20:29 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:07.660 07:20:29 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:07.660 07:20:29 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:07.660 07:20:29 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:07.660 07:20:29 -- target/multipath.sh@38 -- # return 0 00:11:07.660 07:20:29 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:07.660 07:20:29 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:07.660 07:20:29 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:07.660 07:20:29 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:07.660 07:20:29 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:07.660 07:20:29 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:07.660 07:20:29 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:07.660 07:20:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:07.660 07:20:29 -- target/multipath.sh@22 -- # local timeout=20 00:11:07.660 07:20:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:07.660 07:20:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:07.660 07:20:29 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:07.660 07:20:29 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:07.660 07:20:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:07.660 07:20:29 -- target/multipath.sh@22 -- # local timeout=20 00:11:07.660 07:20:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:07.661 07:20:29 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:07.661 07:20:29 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:07.661 07:20:29 -- target/multipath.sh@85 -- # echo numa 00:11:07.661 07:20:29 -- target/multipath.sh@88 -- # fio_pid=74694 00:11:07.661 07:20:29 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:07.661 07:20:29 -- target/multipath.sh@90 -- # sleep 1 00:11:07.661 [global] 00:11:07.661 thread=1 00:11:07.661 invalidate=1 00:11:07.661 rw=randrw 00:11:07.661 time_based=1 00:11:07.661 runtime=6 00:11:07.661 ioengine=libaio 00:11:07.661 direct=1 00:11:07.661 bs=4096 00:11:07.661 iodepth=128 00:11:07.661 norandommap=0 00:11:07.661 numjobs=1 00:11:07.661 00:11:07.661 verify_dump=1 00:11:07.661 verify_backlog=512 00:11:07.661 verify_state_save=0 00:11:07.661 do_verify=1 00:11:07.661 verify=crc32c-intel 00:11:07.661 [job0] 00:11:07.661 filename=/dev/nvme0n1 00:11:07.661 Could not set queue depth (nvme0n1) 00:11:07.919 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.919 fio-3.35 00:11:07.919 Starting 1 thread 00:11:08.884 07:20:30 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:09.141 07:20:31 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:09.399 07:20:31 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:09.399 07:20:31 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:09.399 07:20:31 -- target/multipath.sh@22 -- # local timeout=20 00:11:09.399 07:20:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:09.399 07:20:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:09.399 07:20:31 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:09.399 07:20:31 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:09.399 07:20:31 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:09.399 07:20:31 -- target/multipath.sh@22 -- # local timeout=20 00:11:09.399 07:20:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:09.399 07:20:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:09.399 07:20:31 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:09.399 07:20:31 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:09.657 07:20:31 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:09.916 07:20:32 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:09.916 07:20:32 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:09.916 07:20:32 -- target/multipath.sh@22 -- # local timeout=20 00:11:09.916 07:20:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:09.916 07:20:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:09.916 07:20:32 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:09.916 07:20:32 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:09.916 07:20:32 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:09.916 07:20:32 -- target/multipath.sh@22 -- # local timeout=20 00:11:09.916 07:20:32 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:09.916 07:20:32 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:09.916 07:20:32 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:09.916 07:20:32 -- target/multipath.sh@104 -- # wait 74694 00:11:14.100 00:11:14.100 job0: (groupid=0, jobs=1): err= 0: pid=74715: Thu Nov 28 07:20:36 2024 00:11:14.100 read: IOPS=11.0k, BW=42.8MiB/s (44.9MB/s)(257MiB/6006msec) 00:11:14.100 slat (usec): min=4, max=7675, avg=52.68, stdev=222.21 00:11:14.100 clat (usec): min=1259, max=15109, avg=7944.11, stdev=1506.91 00:11:14.100 lat (usec): min=1277, max=15146, avg=7996.78, stdev=1512.07 00:11:14.100 clat percentiles (usec): 00:11:14.100 | 1.00th=[ 4113], 5.00th=[ 5669], 10.00th=[ 6587], 20.00th=[ 7177], 00:11:14.100 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:11:14.100 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9503], 95.00th=[11600], 00:11:14.100 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13173], 99.95th=[13173], 00:11:14.100 | 99.99th=[13829] 00:11:14.100 bw ( KiB/s): min=10760, max=26056, per=52.03%, avg=22831.73, stdev=4228.34, samples=11 00:11:14.100 iops : min= 2690, max= 6514, avg=5708.09, stdev=1057.12, samples=11 00:11:14.100 write: IOPS=6188, BW=24.2MiB/s (25.3MB/s)(135MiB/5593msec); 0 zone resets 00:11:14.100 slat (usec): min=11, max=3275, avg=62.35, stdev=137.24 00:11:14.100 clat (usec): min=1587, max=13796, avg=6889.46, stdev=1291.04 00:11:14.100 lat (usec): min=1622, max=13820, avg=6951.81, stdev=1295.65 00:11:14.100 clat percentiles (usec): 00:11:14.100 | 1.00th=[ 3326], 5.00th=[ 4080], 10.00th=[ 4883], 20.00th=[ 6390], 00:11:14.100 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7242], 00:11:14.100 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8225], 00:11:14.100 | 99.00th=[10945], 99.50th=[11469], 99.90th=[12518], 99.95th=[12911], 00:11:14.100 | 99.99th=[13042] 00:11:14.100 bw ( KiB/s): min=11072, max=25580, per=92.26%, avg=22839.64, stdev=4147.41, samples=11 00:11:14.100 iops : min= 2768, max= 6395, avg=5709.91, stdev=1036.85, samples=11 00:11:14.100 lat (msec) : 2=0.03%, 4=2.02%, 10=91.53%, 20=6.43% 00:11:14.100 cpu : usr=6.11%, sys=24.60%, ctx=5791, majf=0, minf=78 00:11:14.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:14.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:14.100 issued rwts: total=65881,34614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.100 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:14.100 00:11:14.101 Run status group 0 (all jobs): 00:11:14.101 READ: bw=42.8MiB/s (44.9MB/s), 42.8MiB/s-42.8MiB/s (44.9MB/s-44.9MB/s), io=257MiB (270MB), run=6006-6006msec 00:11:14.101 WRITE: bw=24.2MiB/s (25.3MB/s), 24.2MiB/s-24.2MiB/s (25.3MB/s-25.3MB/s), io=135MiB (142MB), run=5593-5593msec 00:11:14.101 00:11:14.101 Disk stats (read/write): 00:11:14.101 nvme0n1: ios=65099/33774, merge=0/0, 
ticks=493496/216449, in_queue=709945, util=98.57% 00:11:14.101 07:20:36 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:11:14.359 07:20:36 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:14.618 07:20:36 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:14.618 07:20:36 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:14.618 07:20:36 -- target/multipath.sh@22 -- # local timeout=20 00:11:14.618 07:20:36 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:14.618 07:20:36 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:14.618 07:20:36 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:14.618 07:20:36 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:14.618 07:20:36 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:14.618 07:20:36 -- target/multipath.sh@22 -- # local timeout=20 00:11:14.618 07:20:36 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:14.618 07:20:36 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:14.618 07:20:36 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:14.618 07:20:36 -- target/multipath.sh@113 -- # echo round-robin 00:11:14.618 07:20:36 -- target/multipath.sh@116 -- # fio_pid=74797 00:11:14.618 07:20:36 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:14.618 07:20:36 -- target/multipath.sh@118 -- # sleep 1 00:11:14.618 [global] 00:11:14.618 thread=1 00:11:14.618 invalidate=1 00:11:14.618 rw=randrw 00:11:14.618 time_based=1 00:11:14.618 runtime=6 00:11:14.618 ioengine=libaio 00:11:14.618 direct=1 00:11:14.618 bs=4096 00:11:14.618 iodepth=128 00:11:14.618 norandommap=0 00:11:14.618 numjobs=1 00:11:14.618 00:11:14.618 verify_dump=1 00:11:14.618 verify_backlog=512 00:11:14.618 verify_state_save=0 00:11:14.618 do_verify=1 00:11:14.618 verify=crc32c-intel 00:11:14.618 [job0] 00:11:14.618 filename=/dev/nvme0n1 00:11:14.618 Could not set queue depth (nvme0n1) 00:11:14.618 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.618 fio-3.35 00:11:14.618 Starting 1 thread 00:11:15.553 07:20:37 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:11:15.812 07:20:38 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:16.071 07:20:38 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:16.071 07:20:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:16.071 07:20:38 -- target/multipath.sh@22 -- # local timeout=20 00:11:16.071 07:20:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:16.071 07:20:38 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:16.071 07:20:38 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:16.071 07:20:38 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:16.071 07:20:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:16.071 07:20:38 -- target/multipath.sh@22 -- # local timeout=20 00:11:16.071 07:20:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:16.071 07:20:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:16.071 07:20:38 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:16.071 07:20:38 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:16.638 07:20:38 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:16.896 07:20:38 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:16.896 07:20:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:16.896 07:20:38 -- target/multipath.sh@22 -- # local timeout=20 00:11:16.896 07:20:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:16.896 07:20:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:16.896 07:20:38 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:16.896 07:20:38 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:16.896 07:20:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:16.896 07:20:38 -- target/multipath.sh@22 -- # local timeout=20 00:11:16.896 07:20:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:16.896 07:20:38 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:16.896 07:20:38 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:16.896 07:20:38 -- target/multipath.sh@132 -- # wait 74797 00:11:21.086 00:11:21.086 job0: (groupid=0, jobs=1): err= 0: pid=74818: Thu Nov 28 07:20:42 2024 00:11:21.086 read: IOPS=12.6k, BW=49.0MiB/s (51.4MB/s)(294MiB/6005msec) 00:11:21.086 slat (usec): min=2, max=7453, avg=41.00, stdev=190.94 00:11:21.086 clat (usec): min=481, max=14567, avg=7141.60, stdev=1716.15 00:11:21.086 lat (usec): min=501, max=14576, avg=7182.60, stdev=1729.25 00:11:21.086 clat percentiles (usec): 00:11:21.086 | 1.00th=[ 3130], 5.00th=[ 4080], 10.00th=[ 4817], 20.00th=[ 5800], 00:11:21.086 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7570], 00:11:21.086 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 8586], 95.00th=[10159], 00:11:21.086 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12911], 99.95th=[13042], 00:11:21.086 | 99.99th=[13304] 00:11:21.086 bw ( KiB/s): min= 9680, max=44784, per=50.06%, avg=25138.67, stdev=9543.94, samples=12 00:11:21.086 iops : min= 2420, max=11196, avg=6284.67, stdev=2385.98, samples=12 00:11:21.086 write: IOPS=7320, BW=28.6MiB/s (30.0MB/s)(147MiB/5153msec); 0 zone resets 00:11:21.086 slat (usec): min=3, max=1854, avg=50.98, stdev=113.19 00:11:21.086 clat (usec): min=541, max=13244, avg=5921.89, stdev=1695.40 00:11:21.086 lat (usec): min=609, max=13268, avg=5972.87, stdev=1708.11 00:11:21.086 clat percentiles (usec): 00:11:21.086 | 1.00th=[ 2474], 5.00th=[ 3130], 10.00th=[ 3556], 20.00th=[ 4146], 00:11:21.086 | 30.00th=[ 4752], 40.00th=[ 5538], 50.00th=[ 6456], 60.00th=[ 6783], 00:11:21.086 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 7963], 00:11:21.086 | 99.00th=[10159], 99.50th=[11076], 99.90th=[12125], 99.95th=[12518], 00:11:21.086 | 99.99th=[13042] 00:11:21.086 bw ( KiB/s): min=10080, max=45056, per=85.76%, avg=25114.00, stdev=9295.06, samples=12 00:11:21.086 iops : min= 2520, max=11264, avg=6278.50, stdev=2323.77, samples=12 00:11:21.086 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.03% 00:11:21.086 lat (msec) : 2=0.16%, 4=8.42%, 10=87.52%, 20=3.85% 00:11:21.086 cpu : usr=6.68%, sys=26.91%, ctx=7130, majf=0, minf=90 00:11:21.086 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:21.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:21.086 issued rwts: total=75390,37723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.086 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:21.086 00:11:21.086 Run status group 0 (all jobs): 00:11:21.086 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=294MiB (309MB), run=6005-6005msec 00:11:21.086 WRITE: bw=28.6MiB/s (30.0MB/s), 28.6MiB/s-28.6MiB/s (30.0MB/s-30.0MB/s), io=147MiB (155MB), run=5153-5153msec 00:11:21.086 00:11:21.086 Disk stats (read/write): 00:11:21.086 nvme0n1: ios=73987/37607, merge=0/0, ticks=489115/198527, in_queue=687642, util=98.62% 00:11:21.086 07:20:42 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:21.086 07:20:43 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.086 07:20:43 -- common/autotest_common.sh@1208 -- # local i=0 00:11:21.086 07:20:43 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.086 07:20:43 
-- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:21.086 07:20:43 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:21.086 07:20:43 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.086 07:20:43 -- common/autotest_common.sh@1220 -- # return 0 00:11:21.086 07:20:43 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.086 07:20:43 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:21.086 07:20:43 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:21.086 07:20:43 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:21.086 07:20:43 -- target/multipath.sh@144 -- # nvmftestfini 00:11:21.086 07:20:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:21.086 07:20:43 -- nvmf/common.sh@116 -- # sync 00:11:21.086 07:20:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:21.086 07:20:43 -- nvmf/common.sh@119 -- # set +e 00:11:21.345 07:20:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:21.345 07:20:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:21.345 rmmod nvme_tcp 00:11:21.345 rmmod nvme_fabrics 00:11:21.345 rmmod nvme_keyring 00:11:21.345 07:20:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:21.345 07:20:43 -- nvmf/common.sh@123 -- # set -e 00:11:21.345 07:20:43 -- nvmf/common.sh@124 -- # return 0 00:11:21.345 07:20:43 -- nvmf/common.sh@477 -- # '[' -n 74604 ']' 00:11:21.345 07:20:43 -- nvmf/common.sh@478 -- # killprocess 74604 00:11:21.345 07:20:43 -- common/autotest_common.sh@936 -- # '[' -z 74604 ']' 00:11:21.345 07:20:43 -- common/autotest_common.sh@940 -- # kill -0 74604 00:11:21.345 07:20:43 -- common/autotest_common.sh@941 -- # uname 00:11:21.345 07:20:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:21.345 07:20:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74604 00:11:21.345 07:20:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:21.345 07:20:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:21.345 killing process with pid 74604 00:11:21.345 07:20:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74604' 00:11:21.345 07:20:43 -- common/autotest_common.sh@955 -- # kill 74604 00:11:21.345 07:20:43 -- common/autotest_common.sh@960 -- # wait 74604 00:11:21.604 07:20:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:21.604 07:20:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:21.604 07:20:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:21.604 07:20:43 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:21.604 07:20:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:21.604 07:20:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.604 07:20:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.604 07:20:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.604 07:20:43 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:21.604 00:11:21.604 real 0m19.279s 00:11:21.604 user 1m13.254s 00:11:21.604 sys 0m9.493s 00:11:21.604 07:20:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.604 07:20:43 -- common/autotest_common.sh@10 -- # set +x 00:11:21.604 ************************************ 00:11:21.604 END TEST nvmf_multipath 00:11:21.604 ************************************ 00:11:21.604 07:20:43 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:21.604 07:20:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:21.604 07:20:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.604 07:20:43 -- common/autotest_common.sh@10 -- # set +x 00:11:21.604 ************************************ 00:11:21.604 START TEST nvmf_zcopy 00:11:21.604 ************************************ 00:11:21.604 07:20:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:21.604 * Looking for test storage... 00:11:21.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.604 07:20:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:21.604 07:20:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:21.604 07:20:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:21.863 07:20:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:21.863 07:20:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:21.863 07:20:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:21.863 07:20:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:21.863 07:20:43 -- scripts/common.sh@335 -- # IFS=.-: 00:11:21.863 07:20:43 -- scripts/common.sh@335 -- # read -ra ver1 00:11:21.863 07:20:43 -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.863 07:20:43 -- scripts/common.sh@336 -- # read -ra ver2 00:11:21.863 07:20:43 -- scripts/common.sh@337 -- # local 'op=<' 00:11:21.863 07:20:43 -- scripts/common.sh@339 -- # ver1_l=2 00:11:21.863 07:20:43 -- scripts/common.sh@340 -- # ver2_l=1 00:11:21.863 07:20:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:21.863 07:20:43 -- scripts/common.sh@343 -- # case "$op" in 00:11:21.863 07:20:43 -- scripts/common.sh@344 -- # : 1 00:11:21.863 07:20:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:21.863 07:20:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.863 07:20:43 -- scripts/common.sh@364 -- # decimal 1 00:11:21.863 07:20:43 -- scripts/common.sh@352 -- # local d=1 00:11:21.863 07:20:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.863 07:20:43 -- scripts/common.sh@354 -- # echo 1 00:11:21.863 07:20:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:21.863 07:20:43 -- scripts/common.sh@365 -- # decimal 2 00:11:21.863 07:20:43 -- scripts/common.sh@352 -- # local d=2 00:11:21.863 07:20:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.863 07:20:43 -- scripts/common.sh@354 -- # echo 2 00:11:21.863 07:20:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:21.863 07:20:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:21.863 07:20:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:21.863 07:20:43 -- scripts/common.sh@367 -- # return 0 00:11:21.863 07:20:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.863 07:20:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.863 --rc genhtml_branch_coverage=1 00:11:21.863 --rc genhtml_function_coverage=1 00:11:21.863 --rc genhtml_legend=1 00:11:21.863 --rc geninfo_all_blocks=1 00:11:21.863 --rc geninfo_unexecuted_blocks=1 00:11:21.863 00:11:21.863 ' 00:11:21.863 07:20:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.863 --rc genhtml_branch_coverage=1 00:11:21.863 --rc genhtml_function_coverage=1 00:11:21.863 --rc genhtml_legend=1 00:11:21.863 --rc geninfo_all_blocks=1 00:11:21.863 --rc geninfo_unexecuted_blocks=1 00:11:21.863 00:11:21.863 ' 00:11:21.863 07:20:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.863 --rc genhtml_branch_coverage=1 00:11:21.863 --rc genhtml_function_coverage=1 00:11:21.863 --rc genhtml_legend=1 00:11:21.863 --rc geninfo_all_blocks=1 00:11:21.863 --rc geninfo_unexecuted_blocks=1 00:11:21.863 00:11:21.863 ' 00:11:21.863 07:20:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.863 --rc genhtml_branch_coverage=1 00:11:21.863 --rc genhtml_function_coverage=1 00:11:21.863 --rc genhtml_legend=1 00:11:21.863 --rc geninfo_all_blocks=1 00:11:21.863 --rc geninfo_unexecuted_blocks=1 00:11:21.863 00:11:21.863 ' 00:11:21.863 07:20:43 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.863 07:20:43 -- nvmf/common.sh@7 -- # uname -s 00:11:21.863 07:20:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.863 07:20:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.863 07:20:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.863 07:20:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.863 07:20:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.863 07:20:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.863 07:20:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.863 07:20:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.863 07:20:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.863 07:20:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.863 07:20:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:11:21.863 
07:20:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:11:21.863 07:20:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.863 07:20:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.863 07:20:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.863 07:20:43 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.863 07:20:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.863 07:20:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.863 07:20:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.864 07:20:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.864 07:20:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.864 07:20:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.864 07:20:43 -- paths/export.sh@5 -- # export PATH 00:11:21.864 07:20:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.864 07:20:43 -- nvmf/common.sh@46 -- # : 0 00:11:21.864 07:20:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:21.864 07:20:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:21.864 07:20:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:21.864 07:20:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.864 07:20:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.864 07:20:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
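The xtrace above also walks through the small version-comparison helpers in scripts/common.sh that decide which lcov option spelling to export. A condensed sketch of that logic, reconstructed from the trace rather than copied from the source, so treat the bodies as an illustration:

    # Sketch of the helpers traced above (scripts/common.sh); condensed from the xtrace.
    lt() { cmp_versions "$1" '<' "$2"; }      # e.g. lt 1.15 2  ->  "is 1.15 older than 2?"

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"

        # Walk the components of the longer version string; absent components count as 0.
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == '>' ]]; return; }
            ((d1 < d2)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]    # every component equal
    }

In the trace this evaluates lt 1.15 2 against the installed lcov and succeeds, so the lcov 1.x spelling (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is the one exported.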
00:11:21.864 07:20:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:21.864 07:20:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:21.864 07:20:43 -- target/zcopy.sh@12 -- # nvmftestinit 00:11:21.864 07:20:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:21.864 07:20:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.864 07:20:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:21.864 07:20:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:21.864 07:20:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:21.864 07:20:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.864 07:20:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.864 07:20:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.864 07:20:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:21.864 07:20:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:21.864 07:20:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:21.864 07:20:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:21.864 07:20:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:21.864 07:20:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:21.864 07:20:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.864 07:20:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.864 07:20:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:21.864 07:20:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:21.864 07:20:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.864 07:20:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.864 07:20:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.864 07:20:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.864 07:20:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.864 07:20:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.864 07:20:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.864 07:20:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.864 07:20:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:21.864 07:20:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:21.864 Cannot find device "nvmf_tgt_br" 00:11:21.864 07:20:44 -- nvmf/common.sh@154 -- # true 00:11:21.864 07:20:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.864 Cannot find device "nvmf_tgt_br2" 00:11:21.864 07:20:44 -- nvmf/common.sh@155 -- # true 00:11:21.864 07:20:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:21.864 07:20:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:21.864 Cannot find device "nvmf_tgt_br" 00:11:21.864 07:20:44 -- nvmf/common.sh@157 -- # true 00:11:21.864 07:20:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:21.864 Cannot find device "nvmf_tgt_br2" 00:11:21.864 07:20:44 -- nvmf/common.sh@158 -- # true 00:11:21.864 07:20:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:21.864 07:20:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:22.123 07:20:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:22.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.124 07:20:44 -- nvmf/common.sh@161 -- # true 00:11:22.124 07:20:44 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.124 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.124 07:20:44 -- nvmf/common.sh@162 -- # true 00:11:22.124 07:20:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:22.124 07:20:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:22.124 07:20:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:22.124 07:20:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:22.124 07:20:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:22.124 07:20:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:22.124 07:20:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:22.124 07:20:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:22.124 07:20:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:22.124 07:20:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:22.124 07:20:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:22.124 07:20:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:22.124 07:20:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:22.124 07:20:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:22.124 07:20:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:22.124 07:20:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:22.124 07:20:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:22.124 07:20:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:22.124 07:20:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:22.124 07:20:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:22.124 07:20:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:22.124 07:20:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:22.124 07:20:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:22.124 07:20:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:22.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:11:22.124 00:11:22.124 --- 10.0.0.2 ping statistics --- 00:11:22.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.124 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:22.124 07:20:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:22.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:22.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:11:22.124 00:11:22.124 --- 10.0.0.3 ping statistics --- 00:11:22.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.124 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:22.124 07:20:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:22.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:22.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:22.124 00:11:22.124 --- 10.0.0.1 ping statistics --- 00:11:22.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.124 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:22.124 07:20:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.124 07:20:44 -- nvmf/common.sh@421 -- # return 0 00:11:22.124 07:20:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:22.124 07:20:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.124 07:20:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:22.124 07:20:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:22.124 07:20:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.124 07:20:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:22.124 07:20:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:22.124 07:20:44 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:22.124 07:20:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:22.124 07:20:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.124 07:20:44 -- common/autotest_common.sh@10 -- # set +x 00:11:22.124 07:20:44 -- nvmf/common.sh@469 -- # nvmfpid=75075 00:11:22.124 07:20:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:22.124 07:20:44 -- nvmf/common.sh@470 -- # waitforlisten 75075 00:11:22.124 07:20:44 -- common/autotest_common.sh@829 -- # '[' -z 75075 ']' 00:11:22.124 07:20:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.124 07:20:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.124 07:20:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.124 07:20:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.124 07:20:44 -- common/autotest_common.sh@10 -- # set +x 00:11:22.383 [2024-11-28 07:20:44.407739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:22.383 [2024-11-28 07:20:44.408040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.383 [2024-11-28 07:20:44.546111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.383 [2024-11-28 07:20:44.641077] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:22.383 [2024-11-28 07:20:44.641406] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.383 [2024-11-28 07:20:44.641429] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.383 [2024-11-28 07:20:44.641440] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
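The ip/iptables commands traced above (nvmf_veth_init in nvmf/common.sh) build the topology the rest of the test uses: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace and is reached from the host over veth pairs joined by a bridge, which is what the three pings just verified. Condensed, and with the individual link-up steps omitted, the setup is roughly:

    # Condensed from the nvmf_veth_init trace above (link-up commands omitted).
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side ends move into the namespace; host keeps the *_br peers.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP

    # Bridge the host-side ends together and allow NVMe/TCP (port 4420) traffic.
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT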
00:11:22.383 [2024-11-28 07:20:44.641478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.327 07:20:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.327 07:20:45 -- common/autotest_common.sh@862 -- # return 0 00:11:23.327 07:20:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:23.327 07:20:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:23.327 07:20:45 -- common/autotest_common.sh@10 -- # set +x 00:11:23.327 07:20:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.327 07:20:45 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:23.327 07:20:45 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:23.327 07:20:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.327 07:20:45 -- common/autotest_common.sh@10 -- # set +x 00:11:23.327 [2024-11-28 07:20:45.535617] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.327 07:20:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.328 07:20:45 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:23.328 07:20:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 07:20:45 -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 07:20:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.328 07:20:45 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.328 07:20:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 07:20:45 -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 [2024-11-28 07:20:45.555778] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.328 07:20:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.328 07:20:45 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:23.328 07:20:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 07:20:45 -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 07:20:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.328 07:20:45 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:23.328 07:20:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 07:20:45 -- common/autotest_common.sh@10 -- # set +x 00:11:23.328 malloc0 00:11:23.328 07:20:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.328 07:20:45 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:23.328 07:20:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.328 07:20:45 -- common/autotest_common.sh@10 -- # set +x 00:11:23.586 07:20:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.586 07:20:45 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:23.586 07:20:45 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:23.586 07:20:45 -- nvmf/common.sh@520 -- # config=() 00:11:23.586 07:20:45 -- nvmf/common.sh@520 -- # local subsystem config 00:11:23.586 07:20:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:23.586 07:20:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:23.586 { 00:11:23.586 "params": { 00:11:23.586 "name": "Nvme$subsystem", 00:11:23.586 "trtype": "$TEST_TRANSPORT", 
00:11:23.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:23.586 "adrfam": "ipv4", 00:11:23.586 "trsvcid": "$NVMF_PORT", 00:11:23.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:23.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:23.586 "hdgst": ${hdgst:-false}, 00:11:23.586 "ddgst": ${ddgst:-false} 00:11:23.586 }, 00:11:23.586 "method": "bdev_nvme_attach_controller" 00:11:23.586 } 00:11:23.586 EOF 00:11:23.586 )") 00:11:23.586 07:20:45 -- nvmf/common.sh@542 -- # cat 00:11:23.586 07:20:45 -- nvmf/common.sh@544 -- # jq . 00:11:23.586 07:20:45 -- nvmf/common.sh@545 -- # IFS=, 00:11:23.586 07:20:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:23.586 "params": { 00:11:23.586 "name": "Nvme1", 00:11:23.586 "trtype": "tcp", 00:11:23.586 "traddr": "10.0.0.2", 00:11:23.586 "adrfam": "ipv4", 00:11:23.586 "trsvcid": "4420", 00:11:23.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:23.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:23.586 "hdgst": false, 00:11:23.586 "ddgst": false 00:11:23.586 }, 00:11:23.586 "method": "bdev_nvme_attach_controller" 00:11:23.586 }' 00:11:23.586 [2024-11-28 07:20:45.659980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:23.586 [2024-11-28 07:20:45.660513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75114 ] 00:11:23.586 [2024-11-28 07:20:45.803425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.845 [2024-11-28 07:20:45.898139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.845 Running I/O for 10 seconds... 00:11:33.877 00:11:33.877 Latency(us) 00:11:33.877 [2024-11-28T07:20:56.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.877 [2024-11-28T07:20:56.152Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:33.877 Verification LBA range: start 0x0 length 0x1000 00:11:33.877 Nvme1n1 : 10.01 8852.50 69.16 0.00 0.00 14421.78 1400.09 22163.08 00:11:33.877 [2024-11-28T07:20:56.152Z] =================================================================================================================== 00:11:33.877 [2024-11-28T07:20:56.152Z] Total : 8852.50 69.16 0.00 0.00 14421.78 1400.09 22163.08 00:11:34.136 07:20:56 -- target/zcopy.sh@39 -- # perfpid=75231 00:11:34.136 07:20:56 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:34.136 07:20:56 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:34.136 07:20:56 -- target/zcopy.sh@41 -- # xtrace_disable 00:11:34.136 07:20:56 -- common/autotest_common.sh@10 -- # set +x 00:11:34.136 07:20:56 -- nvmf/common.sh@520 -- # config=() 00:11:34.136 07:20:56 -- nvmf/common.sh@520 -- # local subsystem config 00:11:34.136 07:20:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:34.136 07:20:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:34.136 { 00:11:34.136 "params": { 00:11:34.136 "name": "Nvme$subsystem", 00:11:34.136 "trtype": "$TEST_TRANSPORT", 00:11:34.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:34.136 "adrfam": "ipv4", 00:11:34.136 "trsvcid": "$NVMF_PORT", 00:11:34.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:34.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:34.136 "hdgst": ${hdgst:-false}, 00:11:34.136 "ddgst": ${ddgst:-false} 
00:11:34.136 }, 00:11:34.136 "method": "bdev_nvme_attach_controller" 00:11:34.136 } 00:11:34.136 EOF 00:11:34.136 )") 00:11:34.136 07:20:56 -- nvmf/common.sh@542 -- # cat 00:11:34.136 [2024-11-28 07:20:56.303280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.303340] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 07:20:56 -- nvmf/common.sh@544 -- # jq . 00:11:34.136 07:20:56 -- nvmf/common.sh@545 -- # IFS=, 00:11:34.136 07:20:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:34.136 "params": { 00:11:34.136 "name": "Nvme1", 00:11:34.136 "trtype": "tcp", 00:11:34.136 "traddr": "10.0.0.2", 00:11:34.136 "adrfam": "ipv4", 00:11:34.136 "trsvcid": "4420", 00:11:34.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:34.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:34.136 "hdgst": false, 00:11:34.136 "ddgst": false 00:11:34.136 }, 00:11:34.136 "method": "bdev_nvme_attach_controller" 00:11:34.136 }' 00:11:34.136 [2024-11-28 07:20:56.311241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.311273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.323241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.323274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.335255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.335290] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.336361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
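Both bdevperf passes here (the 10-second verify run above and the 5-second randrw run whose startup banner has just appeared) talk to the same target, provisioned by the RPC calls traced at target/zcopy.sh@22-30. Written out as plain rpc.py invocations, that sequence is roughly the following; rpc_cmd in the trace is a thin wrapper around calls like these:

    # Target-side provisioning as traced above, spelled as direct scripts/rpc.py calls.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # --zcopy is the option under test here
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB malloc bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # bdevperf then attaches as an initiator using the bdev_nvme_attach_controller
    # JSON printed in the log above, fed in on /dev/fd/62 (and /dev/fd/63 for the second run).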
00:11:34.136 [2024-11-28 07:20:56.336581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75231 ] 00:11:34.136 [2024-11-28 07:20:56.347250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.347454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.355257] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.355415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.367279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.367475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.379288] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.379521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.391269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.391434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.399267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.399426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.136 [2024-11-28 07:20:56.407269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.136 [2024-11-28 07:20:56.407425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.395 [2024-11-28 07:20:56.415269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.395 [2024-11-28 07:20:56.415427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.395 [2024-11-28 07:20:56.423271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.395 [2024-11-28 07:20:56.423427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.395 [2024-11-28 07:20:56.431273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.395 [2024-11-28 07:20:56.431455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.443278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.443436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.451283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.451433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.459286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.459443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.467294] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.467444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.471674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.396 [2024-11-28 07:20:56.475300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.475450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.487340] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.487590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.499318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.499463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.511325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.511474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.523343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.523540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.535346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.535576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.547336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.547495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.559353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.559520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.564788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.396 [2024-11-28 07:20:56.571342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.571375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.583366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.583408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.595370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.595413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.607384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.607421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.619381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.619421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.631370] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.631407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.643379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.643416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.655373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.655417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.396 [2024-11-28 07:20:56.667400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.396 [2024-11-28 07:20:56.667438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.679410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.679443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.691425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.691461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.703445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.703483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.715456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.715493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.727467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.727506] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 Running I/O for 5 seconds... 
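The stream of "Requested NSID 1 already in use" / "Unable to add namespace" pairs surrounding the 5-second randrw run is expected output: each pair is one nvmf_subsystem_add_ns RPC rejected because NSID 1 is already attached, issued while I/O is in flight. The commands driving it are not visible in this log because xtrace is disabled at target/zcopy.sh@41, so the loop below is an assumed reconstruction, including its shape and termination condition; $perfpid stands for the bdevperf PID reported above (75231):

    # Assumed shape of the namespace re-add loop (not visible in the log).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    while kill -0 "$perfpid" 2> /dev/null; do
        # NSID 1 is already attached, so each attempt logs the error pair and fails;
        # that failure path is exactly what the repeated records above show.
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done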
00:11:34.655 [2024-11-28 07:20:56.739468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.739503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.756958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.757150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.773954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.774140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.791204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.791408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.805881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.806088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.821382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.821611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.838767] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.839047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.854620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.854875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.871713] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.871954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.887437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.887635] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.896854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.897042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.912738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.912986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.655 [2024-11-28 07:20:56.929196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.655 [2024-11-28 07:20:56.929483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.914 [2024-11-28 07:20:56.946378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.914 [2024-11-28 07:20:56.946653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.914 [2024-11-28 07:20:56.962840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.914 
[2024-11-28 07:20:56.963029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.914 [2024-11-28 07:20:56.980406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.914 [2024-11-28 07:20:56.980646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:56.995522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:56.995754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.005051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.005097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.016687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.016732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.033319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.033369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.052460] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.052505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.067113] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.067158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.076505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.076544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.092168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.092388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.102707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.102749] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.116854] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.116907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.133119] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.133169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.150670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.150728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.165127] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.165179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.915 [2024-11-28 07:20:57.180923] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.915 [2024-11-28 07:20:57.180976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.197836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.197878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.213718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.213758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.232441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.232494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.247122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.247170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.266167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.266222] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.280664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.280707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.295976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.296173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.305441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.305479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.318167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.318212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.334585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.334794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.352511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.352559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.367343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.367391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.376951] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.377004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.392590] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.392644] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.410120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.410339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.426249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.426461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.174 [2024-11-28 07:20:57.435914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.174 [2024-11-28 07:20:57.435954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.451556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.451598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.469291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.469343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.483846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.483885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.499577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.499617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.516387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.516426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.533129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.533172] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.550287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.550491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.566071] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.566249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.584019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.584065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.599555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.599596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.617046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.617221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.632938] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.434 [2024-11-28 07:20:57.632983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.434 [2024-11-28 07:20:57.650101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.435 [2024-11-28 07:20:57.650145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.435 [2024-11-28 07:20:57.665894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.435 [2024-11-28 07:20:57.665935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.435 [2024-11-28 07:20:57.683172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.435 [2024-11-28 07:20:57.683218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.435 [2024-11-28 07:20:57.698903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.435 [2024-11-28 07:20:57.698947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.717550] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.717595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.732078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.732275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.741723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.741763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.757262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.757321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.775189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.775385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.790382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.790427] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.801978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.802157] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.819382] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.819429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.834231] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.834273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.849779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.849828] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.868958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.869206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.884017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.884206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.900961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.901007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.917096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.917144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.934233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.934287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.949892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.949943] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.694 [2024-11-28 07:20:57.968043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.694 [2024-11-28 07:20:57.968109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:57.982779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:57.982987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:57.993027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:57.993072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.008111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.008160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.025128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.025179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.041690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.041918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.058269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.058332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.075807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.075862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.090347] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.090391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.106116] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.106170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.123455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.123498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.138736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.138775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.150259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.150298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.166878] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.166920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.183610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.183654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.200048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.200094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.954 [2024-11-28 07:20:58.216402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.954 [2024-11-28 07:20:58.216453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.233680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.233736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.248679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.248737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.258275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.258334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.274711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.274764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.290795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.290857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.309152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.309204] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.323531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.323751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.339861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.339907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.358717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.358762] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.373159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.373205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.383122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.383163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.398610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.398654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.414967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.415016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.431917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.431960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.449283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.449342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.464681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.464728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.214 [2024-11-28 07:20:58.482307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.214 [2024-11-28 07:20:58.482548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.498167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.498366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.515851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.516048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.531259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.531470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.540844] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.541040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.556681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.556884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.573132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.573334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.589403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.589581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.606652] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.606841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.620855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.621037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.638443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.638619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.653381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.653562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.663520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.663694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.678935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.679091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.695752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.695918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.712761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.712922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.729501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.729657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.473 [2024-11-28 07:20:58.746875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.473 [2024-11-28 07:20:58.747049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.762743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.762906] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.779953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.780120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.796681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.796837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.812995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.813148] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.830287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.830456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.845603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.845751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.862909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.863063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.874044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.874194] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.892173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.892378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.908987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.909028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.924151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.924336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.933754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.933794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.948950] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.948990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.966622] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.966667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.981955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.981995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.732 [2024-11-28 07:20:58.991085] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.732 [2024-11-28 07:20:58.991124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.007539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.007725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.025103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.025145] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.040044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.040089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.049312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.049379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.065524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.065564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.082440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.082483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.099753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.099796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.115471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.115515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.133532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.133578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.148523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.148582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.158489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.158529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.173397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.173441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.192566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.192614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.207444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.207665] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.226229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.226276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.241089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.241130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.991 [2024-11-28 07:20:59.250901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.991 [2024-11-28 07:20:59.251075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.266298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.266361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.283810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.283860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.299089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.299141] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.317900] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.318122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.332937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.333130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.348576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.348759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.367617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.367817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.382650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.382814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.400531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.400704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.415457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.415621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.425169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.425339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.441379] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.441554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.458799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.458969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.475352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.475522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.491700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.491904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.250 [2024-11-28 07:20:59.510669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.250 [2024-11-28 07:20:59.510873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.525757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.525944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.543056] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.543283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.559181] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.559401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.576883] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.577091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.591846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.592060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.601958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.602150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.617042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.617218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.627301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.627478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.643133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.643301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.659824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.660002] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.677000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.677186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.692705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.692882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.711435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.711597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.726298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.726474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.736270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.736452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.751552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.751713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.509 [2024-11-28 07:20:59.768971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.509 [2024-11-28 07:20:59.769138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.784399] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.784601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.801661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.801869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.817606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.817793] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.827050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.827205] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.843881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.843920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.861564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.861603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.876146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.876199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.891842] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.891882] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.910299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.910347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.921065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.921229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.934250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.934303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.952334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.952551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.967828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.967986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:20:59.985715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:20:59.985757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:21:00.001664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:21:00.001703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:21:00.019645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:21:00.019711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:37.769 [2024-11-28 07:21:00.035237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:37.769 [2024-11-28 07:21:00.035487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.051623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.051684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.069942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.069981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.084940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.084981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.102411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.102460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.118953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.118994] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.135376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.135436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.152089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.152133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.169067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.169112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.185937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.186107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.202412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.202451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.219522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.219561] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.233656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.233713] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.251710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.251754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.266530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.266573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.276199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.276239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.028 [2024-11-28 07:21:00.291490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.028 [2024-11-28 07:21:00.291530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.307441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.307485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.326067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.326107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.341500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.341538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.359141] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.359181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.375508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.375547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.392001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.392044] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.408413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.408452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.425040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.425080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.442536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.442592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.457790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.457861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.475781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.475968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.491034] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.491197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.509259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.509296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.524888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.524928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.543369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.543458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.288 [2024-11-28 07:21:00.558418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.288 [2024-11-28 07:21:00.558486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.577410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.577465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.591996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.592035] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.608079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.608164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.623745] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.623784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.633409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.633446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.647087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.647126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.662689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.662742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.680718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.680914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.695844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.696016] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.705928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.705970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.722163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.722250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.738593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.738632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.754916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.754962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.773312] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.773398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.788480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.788519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.797625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.797865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.548 [2024-11-28 07:21:00.813910] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.548 [2024-11-28 07:21:00.813951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.830611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.830648] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.848779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.848974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.864097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.864297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.873988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.874027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.889068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.889109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.905876] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.905916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.924074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.924114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.939484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.939520] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.957310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.957377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.972052] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.972091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:00.987566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:00.987606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:01.003893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:01.003933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:01.021093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:01.021133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:01.037099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:01.037139] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:01.053882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:01.053922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.807 [2024-11-28 07:21:01.069958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:38.807 [2024-11-28 07:21:01.069998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.087169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.087344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.102643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.102831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.120292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.120361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.137990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.138198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.153797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.153854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.173797] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.173870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.189053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.189223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.206942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.206988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.084 [2024-11-28 07:21:01.222006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.084 [2024-11-28 07:21:01.222047] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.085 [2024-11-28 07:21:01.239530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.085 [2024-11-28 07:21:01.239566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.085 [2024-11-28 07:21:01.254972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.085 [2024-11-28 07:21:01.255135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.085 [2024-11-28 07:21:01.273000] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.085 [2024-11-28 07:21:01.273040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.085 [2024-11-28 07:21:01.288779] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.085 [2024-11-28 07:21:01.288834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.085 [2024-11-28 07:21:01.306516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.085 [2024-11-28 07:21:01.306557] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.085 [2024-11-28 07:21:01.321250] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.085 [2024-11-28 07:21:01.321291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.085 [2024-11-28 07:21:01.337363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.085 [2024-11-28 07:21:01.337413] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.085 [2024-11-28 07:21:01.353744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.085 [2024-11-28 07:21:01.353909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.369831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.369871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.389227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.389412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.403717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.403758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.412778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.412942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.424761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.424801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.439510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.439693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.449671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.449709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.461172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.461212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.478974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.479137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.495414] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.495453] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.513359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.513401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.528122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.528180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.538035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.538075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.552850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.553017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.569807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.569973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.586087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.586247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.603421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.603579] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.619262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.619456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.637059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.637226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.651763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.651926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.667607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.667788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.686772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.686932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.434 [2024-11-28 07:21:01.701450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.434 [2024-11-28 07:21:01.701607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.710 [2024-11-28 07:21:01.713587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.710 [2024-11-28 07:21:01.713743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.710 [2024-11-28 07:21:01.728910] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.710 [2024-11-28 07:21:01.729068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.710 [2024-11-28 07:21:01.737604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.710 [2024-11-28 07:21:01.737758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.710 00:11:39.710 Latency(us) 00:11:39.710 [2024-11-28T07:21:01.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:39.710 [2024-11-28T07:21:01.985Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:39.711 Nvme1n1 : 5.00 11620.36 90.78 0.00 0.00 11003.23 4349.21 21209.83 00:11:39.711 [2024-11-28T07:21:01.986Z] =================================================================================================================== 00:11:39.711 [2024-11-28T07:21:01.986Z] Total : 11620.36 90.78 0.00 0.00 11003.23 4349.21 21209.83 00:11:39.711 [2024-11-28 07:21:01.745488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.745646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.757499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.757681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.769523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.769811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.781531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.781815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.793531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.793804] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.805539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.805806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.817542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.817586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.829537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.829581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.841539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.841584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.853543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.853587] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.865545] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.865589] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.877537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.877575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.889548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.889590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.901553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.901594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.913553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.913591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.925569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.925612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.937556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.937593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 [2024-11-28 07:21:01.949552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:39.711 [2024-11-28 07:21:01.949584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:39.711 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75231) - No such process 00:11:39.711 07:21:01 -- target/zcopy.sh@49 -- # wait 75231 00:11:39.711 07:21:01 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.711 07:21:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.711 07:21:01 -- common/autotest_common.sh@10 -- # set +x 00:11:39.711 07:21:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.711 07:21:01 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:39.711 07:21:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.711 07:21:01 -- common/autotest_common.sh@10 -- # set +x 00:11:39.711 delay0 00:11:39.711 07:21:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.711 07:21:01 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:39.711 07:21:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.711 07:21:01 -- common/autotest_common.sh@10 -- # set +x 00:11:39.711 07:21:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.711 07:21:01 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:39.968 [2024-11-28 07:21:02.138319] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:46.532 Initializing NVMe Controllers 00:11:46.532 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:46.532 Initialization complete. Launching workers. 00:11:46.532 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 89 00:11:46.532 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 376, failed to submit 33 00:11:46.532 success 259, unsuccess 117, failed 0 00:11:46.532 07:21:08 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:46.532 07:21:08 -- target/zcopy.sh@60 -- # nvmftestfini 00:11:46.532 07:21:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:46.532 07:21:08 -- nvmf/common.sh@116 -- # sync 00:11:46.532 07:21:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:46.532 07:21:08 -- nvmf/common.sh@119 -- # set +e 00:11:46.532 07:21:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:46.532 07:21:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:46.532 rmmod nvme_tcp 00:11:46.532 rmmod nvme_fabrics 00:11:46.532 rmmod nvme_keyring 00:11:46.532 07:21:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:46.532 07:21:08 -- nvmf/common.sh@123 -- # set -e 00:11:46.532 07:21:08 -- nvmf/common.sh@124 -- # return 0 00:11:46.532 07:21:08 -- nvmf/common.sh@477 -- # '[' -n 75075 ']' 00:11:46.532 07:21:08 -- nvmf/common.sh@478 -- # killprocess 75075 00:11:46.532 07:21:08 -- common/autotest_common.sh@936 -- # '[' -z 75075 ']' 00:11:46.532 07:21:08 -- common/autotest_common.sh@940 -- # kill -0 75075 00:11:46.532 07:21:08 -- common/autotest_common.sh@941 -- # uname 00:11:46.532 07:21:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:46.532 07:21:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75075 00:11:46.532 killing process with pid 75075 00:11:46.532 07:21:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:46.532 07:21:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:46.532 07:21:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75075' 00:11:46.532 07:21:08 -- common/autotest_common.sh@955 -- # kill 75075 00:11:46.532 07:21:08 -- common/autotest_common.sh@960 -- # wait 75075 00:11:46.532 07:21:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:46.532 07:21:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:46.532 07:21:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:46.532 07:21:08 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.532 07:21:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:46.532 07:21:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.532 07:21:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.532 07:21:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.532 07:21:08 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:46.532 00:11:46.532 real 0m24.798s 00:11:46.532 user 0m40.512s 00:11:46.532 sys 0m6.747s 00:11:46.532 07:21:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:46.532 07:21:08 -- common/autotest_common.sh@10 -- # set +x 00:11:46.532 ************************************ 00:11:46.532 END TEST nvmf_zcopy 00:11:46.532 ************************************ 00:11:46.532 07:21:08 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:46.532 07:21:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 
']' 00:11:46.532 07:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.532 07:21:08 -- common/autotest_common.sh@10 -- # set +x 00:11:46.532 ************************************ 00:11:46.532 START TEST nvmf_nmic 00:11:46.532 ************************************ 00:11:46.532 07:21:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:46.532 * Looking for test storage... 00:11:46.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:46.532 07:21:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:46.532 07:21:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:46.532 07:21:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:46.532 07:21:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:46.532 07:21:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:46.532 07:21:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:46.532 07:21:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:46.532 07:21:08 -- scripts/common.sh@335 -- # IFS=.-: 00:11:46.532 07:21:08 -- scripts/common.sh@335 -- # read -ra ver1 00:11:46.532 07:21:08 -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.532 07:21:08 -- scripts/common.sh@336 -- # read -ra ver2 00:11:46.532 07:21:08 -- scripts/common.sh@337 -- # local 'op=<' 00:11:46.532 07:21:08 -- scripts/common.sh@339 -- # ver1_l=2 00:11:46.532 07:21:08 -- scripts/common.sh@340 -- # ver2_l=1 00:11:46.532 07:21:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:46.533 07:21:08 -- scripts/common.sh@343 -- # case "$op" in 00:11:46.533 07:21:08 -- scripts/common.sh@344 -- # : 1 00:11:46.533 07:21:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:46.533 07:21:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.533 07:21:08 -- scripts/common.sh@364 -- # decimal 1 00:11:46.533 07:21:08 -- scripts/common.sh@352 -- # local d=1 00:11:46.533 07:21:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.533 07:21:08 -- scripts/common.sh@354 -- # echo 1 00:11:46.533 07:21:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:46.533 07:21:08 -- scripts/common.sh@365 -- # decimal 2 00:11:46.533 07:21:08 -- scripts/common.sh@352 -- # local d=2 00:11:46.533 07:21:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.533 07:21:08 -- scripts/common.sh@354 -- # echo 2 00:11:46.533 07:21:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:46.533 07:21:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:46.533 07:21:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:46.533 07:21:08 -- scripts/common.sh@367 -- # return 0 00:11:46.533 07:21:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.533 07:21:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:46.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.533 --rc genhtml_branch_coverage=1 00:11:46.533 --rc genhtml_function_coverage=1 00:11:46.533 --rc genhtml_legend=1 00:11:46.533 --rc geninfo_all_blocks=1 00:11:46.533 --rc geninfo_unexecuted_blocks=1 00:11:46.533 00:11:46.533 ' 00:11:46.533 07:21:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:46.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.533 --rc genhtml_branch_coverage=1 00:11:46.533 --rc genhtml_function_coverage=1 00:11:46.533 --rc genhtml_legend=1 00:11:46.533 --rc geninfo_all_blocks=1 00:11:46.533 --rc geninfo_unexecuted_blocks=1 00:11:46.533 00:11:46.533 ' 00:11:46.533 07:21:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:46.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.533 --rc genhtml_branch_coverage=1 00:11:46.533 --rc genhtml_function_coverage=1 00:11:46.533 --rc genhtml_legend=1 00:11:46.533 --rc geninfo_all_blocks=1 00:11:46.533 --rc geninfo_unexecuted_blocks=1 00:11:46.533 00:11:46.533 ' 00:11:46.533 07:21:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:46.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.533 --rc genhtml_branch_coverage=1 00:11:46.533 --rc genhtml_function_coverage=1 00:11:46.533 --rc genhtml_legend=1 00:11:46.533 --rc geninfo_all_blocks=1 00:11:46.533 --rc geninfo_unexecuted_blocks=1 00:11:46.533 00:11:46.533 ' 00:11:46.533 07:21:08 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:46.533 07:21:08 -- nvmf/common.sh@7 -- # uname -s 00:11:46.533 07:21:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.533 07:21:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.533 07:21:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.533 07:21:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.533 07:21:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.533 07:21:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.533 07:21:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.533 07:21:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.533 07:21:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.533 07:21:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.533 07:21:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:11:46.533 
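A side note on the scripts/common.sh trace a little earlier in this test's setup: the cmp_versions helper splits two dotted version strings (IFS=.-:) and compares them field by field to decide whether the installed lcov is new enough. A minimal standalone sketch of that comparison — not the exact scripts/common.sh code, and assuming purely numeric fields — is:

    # Print lt, eq, or gt for $1 relative to $2, comparing dotted fields numerically.
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        local v
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && { echo lt; return; }
            (( a > b )) && { echo gt; return; }
        done
        echo eq
    }

    cmp_versions 1.15 2   # prints lt: lcov 1.15 is older than 2.x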
07:21:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:11:46.533 07:21:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.533 07:21:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.533 07:21:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:46.533 07:21:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:46.533 07:21:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.533 07:21:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.533 07:21:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.533 07:21:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.533 07:21:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.533 07:21:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.533 07:21:08 -- paths/export.sh@5 -- # export PATH 00:11:46.533 07:21:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.533 07:21:08 -- nvmf/common.sh@46 -- # : 0 00:11:46.533 07:21:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:46.533 07:21:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:46.533 07:21:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:46.533 07:21:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.533 07:21:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.533 07:21:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:11:46.533 07:21:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:46.533 07:21:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:46.533 07:21:08 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.533 07:21:08 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:46.533 07:21:08 -- target/nmic.sh@14 -- # nvmftestinit 00:11:46.533 07:21:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:46.533 07:21:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.533 07:21:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:46.533 07:21:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:46.533 07:21:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:46.533 07:21:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.533 07:21:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.533 07:21:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.793 07:21:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:46.793 07:21:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:46.793 07:21:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:46.793 07:21:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:46.793 07:21:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:46.793 07:21:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:46.793 07:21:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.793 07:21:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.793 07:21:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:46.793 07:21:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:46.793 07:21:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:46.793 07:21:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:46.793 07:21:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:46.793 07:21:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.793 07:21:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:46.793 07:21:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:46.793 07:21:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:46.793 07:21:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:46.793 07:21:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:46.793 07:21:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:46.793 Cannot find device "nvmf_tgt_br" 00:11:46.793 07:21:08 -- nvmf/common.sh@154 -- # true 00:11:46.793 07:21:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.793 Cannot find device "nvmf_tgt_br2" 00:11:46.793 07:21:08 -- nvmf/common.sh@155 -- # true 00:11:46.793 07:21:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:46.793 07:21:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:46.793 Cannot find device "nvmf_tgt_br" 00:11:46.793 07:21:08 -- nvmf/common.sh@157 -- # true 00:11:46.793 07:21:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:46.793 Cannot find device "nvmf_tgt_br2" 00:11:46.793 07:21:08 -- nvmf/common.sh@158 -- # true 00:11:46.793 07:21:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:46.793 07:21:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:46.793 07:21:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.793 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:46.793 07:21:08 -- nvmf/common.sh@161 -- # true 00:11:46.793 07:21:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.793 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:46.793 07:21:08 -- nvmf/common.sh@162 -- # true 00:11:46.793 07:21:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:46.793 07:21:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:46.793 07:21:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:46.793 07:21:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:46.793 07:21:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:46.793 07:21:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:46.793 07:21:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:46.793 07:21:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:46.793 07:21:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:46.793 07:21:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:46.793 07:21:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:46.793 07:21:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:46.793 07:21:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:46.793 07:21:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:46.793 07:21:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:46.793 07:21:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:46.793 07:21:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:46.793 07:21:09 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:46.793 07:21:09 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.053 07:21:09 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.053 07:21:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.053 07:21:09 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.053 07:21:09 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.053 07:21:09 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:47.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:47.053 00:11:47.053 --- 10.0.0.2 ping statistics --- 00:11:47.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.053 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:47.053 07:21:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:47.053 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.053 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:11:47.053 00:11:47.053 --- 10.0.0.3 ping statistics --- 00:11:47.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.053 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:47.053 07:21:09 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:47.053 00:11:47.053 --- 10.0.0.1 ping statistics --- 00:11:47.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.053 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:47.053 07:21:09 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.053 07:21:09 -- nvmf/common.sh@421 -- # return 0 00:11:47.053 07:21:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:47.053 07:21:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.053 07:21:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:47.053 07:21:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:47.053 07:21:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.053 07:21:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:47.053 07:21:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:47.053 07:21:09 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:47.053 07:21:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:47.053 07:21:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.053 07:21:09 -- common/autotest_common.sh@10 -- # set +x 00:11:47.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.053 07:21:09 -- nvmf/common.sh@469 -- # nvmfpid=75564 00:11:47.053 07:21:09 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.053 07:21:09 -- nvmf/common.sh@470 -- # waitforlisten 75564 00:11:47.053 07:21:09 -- common/autotest_common.sh@829 -- # '[' -z 75564 ']' 00:11:47.053 07:21:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.053 07:21:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.053 07:21:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.053 07:21:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.053 07:21:09 -- common/autotest_common.sh@10 -- # set +x 00:11:47.053 [2024-11-28 07:21:09.190867] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:47.053 [2024-11-28 07:21:09.191115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.313 [2024-11-28 07:21:09.329987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.313 [2024-11-28 07:21:09.428076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.313 [2024-11-28 07:21:09.428489] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.313 [2024-11-28 07:21:09.428641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.313 [2024-11-28 07:21:09.428770] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
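The nvmf_veth_init trace above builds the test network that these ping checks verify: a network namespace for the target, veth pairs whose host-side peers hang off a bridge, and an iptables rule for the NVMe/TCP listener port. A condensed sketch of that topology (second target interface and 10.0.0.3 omitted; assumes iproute2, iptables, and root) looks like:

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: initiator side and target side, host-side peers end in _br.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addresses used throughout these tests: 10.0.0.1 initiator, 10.0.0.2 target.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers together and open the listener port.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2   # initiator -> target, as checked above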
00:11:47.313 [2024-11-28 07:21:09.428968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.313 [2024-11-28 07:21:09.429082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.313 [2024-11-28 07:21:09.429392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.313 [2024-11-28 07:21:09.429395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.250 07:21:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.251 07:21:10 -- common/autotest_common.sh@862 -- # return 0 00:11:48.251 07:21:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:48.251 07:21:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 07:21:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.251 07:21:10 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 [2024-11-28 07:21:10.278370] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 Malloc0 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 [2024-11-28 07:21:10.346772] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:48.251 test case1: single bdev can't be used in multiple subsystems 00:11:48.251 07:21:10 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 
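The rpc_cmd calls traced above create the TCP transport, a 64 MiB Malloc0 bdev, and subsystem cnode1 with Malloc0 as its namespace, then start test case 1 by standing up a second subsystem. Stripped of the test harness, the underlying sequence against SPDK's JSON-RPC interface is roughly the following sketch (rpc.py path as used in this repo); the final call is the one expected to fail, as the claim error just below confirms:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # test case1: the same bdev cannot back a namespace in a second subsystem,
    # because Malloc0 is already claimed (exclusive_write) by cnode1.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected: Invalid parameters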
00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@28 -- # nmic_status=0 00:11:48.251 07:21:10 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 [2024-11-28 07:21:10.370592] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:48.251 [2024-11-28 07:21:10.370839] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:48.251 [2024-11-28 07:21:10.370864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:48.251 request: 00:11:48.251 { 00:11:48.251 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:48.251 "namespace": { 00:11:48.251 "bdev_name": "Malloc0" 00:11:48.251 }, 00:11:48.251 "method": "nvmf_subsystem_add_ns", 00:11:48.251 "req_id": 1 00:11:48.251 } 00:11:48.251 Got JSON-RPC error response 00:11:48.251 response: 00:11:48.251 { 00:11:48.251 "code": -32602, 00:11:48.251 "message": "Invalid parameters" 00:11:48.251 } 00:11:48.251 Adding namespace failed - expected result. 00:11:48.251 test case2: host connect to nvmf target in multiple paths 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@29 -- # nmic_status=1 00:11:48.251 07:21:10 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:48.251 07:21:10 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:48.251 07:21:10 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:48.251 07:21:10 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:48.251 07:21:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.251 07:21:10 -- common/autotest_common.sh@10 -- # set +x 00:11:48.251 [2024-11-28 07:21:10.382784] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:48.251 07:21:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.251 07:21:10 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.251 07:21:10 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:48.510 07:21:10 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.510 07:21:10 -- common/autotest_common.sh@1187 -- # local i=0 00:11:48.510 07:21:10 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.510 07:21:10 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:48.510 07:21:10 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:50.414 07:21:12 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:50.414 07:21:12 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:50.414 07:21:12 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.414 07:21:12 -- common/autotest_common.sh@1196 -- # 
nvme_devices=1 00:11:50.414 07:21:12 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.414 07:21:12 -- common/autotest_common.sh@1197 -- # return 0 00:11:50.414 07:21:12 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:50.414 [global] 00:11:50.414 thread=1 00:11:50.414 invalidate=1 00:11:50.414 rw=write 00:11:50.414 time_based=1 00:11:50.414 runtime=1 00:11:50.414 ioengine=libaio 00:11:50.414 direct=1 00:11:50.415 bs=4096 00:11:50.415 iodepth=1 00:11:50.415 norandommap=0 00:11:50.415 numjobs=1 00:11:50.415 00:11:50.673 verify_dump=1 00:11:50.673 verify_backlog=512 00:11:50.673 verify_state_save=0 00:11:50.673 do_verify=1 00:11:50.673 verify=crc32c-intel 00:11:50.673 [job0] 00:11:50.673 filename=/dev/nvme0n1 00:11:50.673 Could not set queue depth (nvme0n1) 00:11:50.673 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.673 fio-3.35 00:11:50.673 Starting 1 thread 00:11:52.048 00:11:52.048 job0: (groupid=0, jobs=1): err= 0: pid=75650: Thu Nov 28 07:21:13 2024 00:11:52.048 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:11:52.048 slat (nsec): min=12053, max=54084, avg=13709.95, stdev=2380.36 00:11:52.048 clat (usec): min=141, max=710, avg=177.12, stdev=18.72 00:11:52.048 lat (usec): min=153, max=726, avg=190.83, stdev=18.86 00:11:52.048 clat percentiles (usec): 00:11:52.048 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:11:52.048 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:11:52.048 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 00:11:52.048 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 229], 99.95th=[ 553], 00:11:52.048 | 99.99th=[ 709] 00:11:52.048 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:52.049 slat (usec): min=17, max=1675, avg=21.79, stdev=30.44 00:11:52.049 clat (usec): min=87, max=2732, avg=111.90, stdev=65.69 00:11:52.049 lat (usec): min=106, max=2760, avg=133.69, stdev=73.25 00:11:52.049 clat percentiles (usec): 00:11:52.049 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 99], 00:11:52.049 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:11:52.049 | 70.00th=[ 114], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 131], 00:11:52.049 | 99.00th=[ 155], 99.50th=[ 253], 99.90th=[ 717], 99.95th=[ 2114], 00:11:52.049 | 99.99th=[ 2737] 00:11:52.049 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:11:52.049 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:11:52.049 lat (usec) : 100=11.00%, 250=88.70%, 500=0.20%, 750=0.05% 00:11:52.049 lat (msec) : 2=0.02%, 4=0.03% 00:11:52.049 cpu : usr=1.80%, sys=8.70%, ctx=6109, majf=0, minf=5 00:11:52.049 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.049 issued rwts: total=3036,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.049 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.049 00:11:52.049 Run status group 0 (all jobs): 00:11:52.049 READ: bw=11.8MiB/s (12.4MB/s), 11.8MiB/s-11.8MiB/s (12.4MB/s-12.4MB/s), io=11.9MiB (12.4MB), run=1001-1001msec 00:11:52.049 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 
00:11:52.049 00:11:52.049 Disk stats (read/write): 00:11:52.049 nvme0n1: ios=2610/2990, merge=0/0, ticks=476/351, in_queue=827, util=91.08% 00:11:52.049 07:21:13 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:52.049 07:21:14 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.049 07:21:14 -- common/autotest_common.sh@1208 -- # local i=0 00:11:52.049 07:21:14 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:52.049 07:21:14 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.049 07:21:14 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:52.049 07:21:14 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.049 07:21:14 -- common/autotest_common.sh@1220 -- # return 0 00:11:52.049 07:21:14 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:52.049 07:21:14 -- target/nmic.sh@53 -- # nvmftestfini 00:11:52.049 07:21:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:52.049 07:21:14 -- nvmf/common.sh@116 -- # sync 00:11:52.049 07:21:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:52.049 07:21:14 -- nvmf/common.sh@119 -- # set +e 00:11:52.049 07:21:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:52.049 07:21:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:52.049 rmmod nvme_tcp 00:11:52.049 rmmod nvme_fabrics 00:11:52.049 rmmod nvme_keyring 00:11:52.049 07:21:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:52.049 07:21:14 -- nvmf/common.sh@123 -- # set -e 00:11:52.049 07:21:14 -- nvmf/common.sh@124 -- # return 0 00:11:52.049 07:21:14 -- nvmf/common.sh@477 -- # '[' -n 75564 ']' 00:11:52.049 07:21:14 -- nvmf/common.sh@478 -- # killprocess 75564 00:11:52.049 07:21:14 -- common/autotest_common.sh@936 -- # '[' -z 75564 ']' 00:11:52.049 07:21:14 -- common/autotest_common.sh@940 -- # kill -0 75564 00:11:52.049 07:21:14 -- common/autotest_common.sh@941 -- # uname 00:11:52.049 07:21:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.049 07:21:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75564 00:11:52.049 killing process with pid 75564 00:11:52.049 07:21:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:52.049 07:21:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:52.049 07:21:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75564' 00:11:52.049 07:21:14 -- common/autotest_common.sh@955 -- # kill 75564 00:11:52.049 07:21:14 -- common/autotest_common.sh@960 -- # wait 75564 00:11:52.307 07:21:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:52.307 07:21:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:52.307 07:21:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:52.307 07:21:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:52.307 07:21:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:52.307 07:21:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.307 07:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.307 07:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.307 07:21:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:52.307 ************************************ 00:11:52.307 END TEST nvmf_nmic 00:11:52.307 ************************************ 00:11:52.307 00:11:52.307 real 0m5.852s 00:11:52.307 
user 0m18.746s 00:11:52.307 sys 0m2.282s 00:11:52.307 07:21:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:52.307 07:21:14 -- common/autotest_common.sh@10 -- # set +x 00:11:52.307 07:21:14 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:52.307 07:21:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:52.307 07:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:52.307 07:21:14 -- common/autotest_common.sh@10 -- # set +x 00:11:52.307 ************************************ 00:11:52.307 START TEST nvmf_fio_target 00:11:52.307 ************************************ 00:11:52.307 07:21:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:52.566 * Looking for test storage... 00:11:52.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:52.566 07:21:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:52.566 07:21:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:52.566 07:21:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:52.566 07:21:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:52.566 07:21:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:52.566 07:21:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:52.566 07:21:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:52.566 07:21:14 -- scripts/common.sh@335 -- # IFS=.-: 00:11:52.566 07:21:14 -- scripts/common.sh@335 -- # read -ra ver1 00:11:52.566 07:21:14 -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.566 07:21:14 -- scripts/common.sh@336 -- # read -ra ver2 00:11:52.566 07:21:14 -- scripts/common.sh@337 -- # local 'op=<' 00:11:52.566 07:21:14 -- scripts/common.sh@339 -- # ver1_l=2 00:11:52.566 07:21:14 -- scripts/common.sh@340 -- # ver2_l=1 00:11:52.566 07:21:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:52.566 07:21:14 -- scripts/common.sh@343 -- # case "$op" in 00:11:52.566 07:21:14 -- scripts/common.sh@344 -- # : 1 00:11:52.566 07:21:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:52.566 07:21:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.566 07:21:14 -- scripts/common.sh@364 -- # decimal 1 00:11:52.566 07:21:14 -- scripts/common.sh@352 -- # local d=1 00:11:52.566 07:21:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.566 07:21:14 -- scripts/common.sh@354 -- # echo 1 00:11:52.566 07:21:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:52.566 07:21:14 -- scripts/common.sh@365 -- # decimal 2 00:11:52.566 07:21:14 -- scripts/common.sh@352 -- # local d=2 00:11:52.566 07:21:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.566 07:21:14 -- scripts/common.sh@354 -- # echo 2 00:11:52.566 07:21:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:52.566 07:21:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:52.566 07:21:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:52.566 07:21:14 -- scripts/common.sh@367 -- # return 0 00:11:52.566 07:21:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.566 07:21:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:52.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.566 --rc genhtml_branch_coverage=1 00:11:52.566 --rc genhtml_function_coverage=1 00:11:52.566 --rc genhtml_legend=1 00:11:52.566 --rc geninfo_all_blocks=1 00:11:52.566 --rc geninfo_unexecuted_blocks=1 00:11:52.566 00:11:52.566 ' 00:11:52.566 07:21:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:52.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.566 --rc genhtml_branch_coverage=1 00:11:52.566 --rc genhtml_function_coverage=1 00:11:52.566 --rc genhtml_legend=1 00:11:52.566 --rc geninfo_all_blocks=1 00:11:52.566 --rc geninfo_unexecuted_blocks=1 00:11:52.566 00:11:52.566 ' 00:11:52.566 07:21:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:52.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.566 --rc genhtml_branch_coverage=1 00:11:52.566 --rc genhtml_function_coverage=1 00:11:52.566 --rc genhtml_legend=1 00:11:52.566 --rc geninfo_all_blocks=1 00:11:52.566 --rc geninfo_unexecuted_blocks=1 00:11:52.566 00:11:52.566 ' 00:11:52.566 07:21:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:52.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.566 --rc genhtml_branch_coverage=1 00:11:52.566 --rc genhtml_function_coverage=1 00:11:52.566 --rc genhtml_legend=1 00:11:52.566 --rc geninfo_all_blocks=1 00:11:52.566 --rc geninfo_unexecuted_blocks=1 00:11:52.566 00:11:52.566 ' 00:11:52.566 07:21:14 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:52.566 07:21:14 -- nvmf/common.sh@7 -- # uname -s 00:11:52.566 07:21:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.566 07:21:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.566 07:21:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.566 07:21:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.566 07:21:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.566 07:21:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.566 07:21:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.566 07:21:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.566 07:21:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.566 07:21:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.566 07:21:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:11:52.566 
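As in the nmic test above, nvmf/common.sh derives the host NQN here with nvme gen-hostnqn and takes its UUID suffix as the host ID (next entry). That pair is what the tests hand to nvme-cli when attaching to a subsystem; a minimal sketch of the pattern, reusing the subsystem NQN, address, and port from the earlier connect calls, is:

    HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*:}          # just the <uuid> part

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"

    # ...run I/O against the new /dev/nvme*n1 device...

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1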
07:21:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:11:52.566 07:21:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.566 07:21:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.566 07:21:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:52.566 07:21:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:52.566 07:21:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.566 07:21:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.566 07:21:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.566 07:21:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.566 07:21:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.566 07:21:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.566 07:21:14 -- paths/export.sh@5 -- # export PATH 00:11:52.566 07:21:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.566 07:21:14 -- nvmf/common.sh@46 -- # : 0 00:11:52.566 07:21:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:52.566 07:21:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:52.566 07:21:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:52.566 07:21:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.566 07:21:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.566 07:21:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
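
The lcov xtrace further up (the lt/cmp_versions/decimal calls) boils down to a field-by-field numeric version comparison: lcov --version piped through awk '{print $NF}' reports 1.15, that value is compared against 2, and because 1.15 < 2 the lcov_*-prefixed coverage flags are exported. A condensed, hypothetical restatement of that check in plain bash — version_lt is an illustrative name, not the real lt/cmp_versions helpers from scripts/common.sh — might look like:

  # Sketch only: split both versions on dots/dashes/colons and compare
  # field by field, numerically, treating missing fields as 0.
  version_lt() {                      # usage: version_lt 1.15 2
      local IFS=.-: a=() b=() i
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1                        # equal, so not less-than
  }

  # lcov 1.15 is older than 2.x, so the older lcov_* option names apply,
  # matching the LCOV_OPTS export seen in the trace.
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi
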
00:11:52.566 07:21:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:52.566 07:21:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:52.566 07:21:14 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:52.566 07:21:14 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:52.566 07:21:14 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:52.566 07:21:14 -- target/fio.sh@16 -- # nvmftestinit 00:11:52.566 07:21:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:52.566 07:21:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.566 07:21:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:52.566 07:21:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:52.566 07:21:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:52.566 07:21:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.566 07:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:52.566 07:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.566 07:21:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:52.566 07:21:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:52.566 07:21:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:52.566 07:21:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:52.566 07:21:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:52.566 07:21:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:52.566 07:21:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.566 07:21:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.566 07:21:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:52.566 07:21:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:52.566 07:21:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:52.566 07:21:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:52.566 07:21:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:52.566 07:21:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.566 07:21:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:52.566 07:21:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:52.566 07:21:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:52.566 07:21:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:52.566 07:21:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:52.566 07:21:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:52.566 Cannot find device "nvmf_tgt_br" 00:11:52.566 07:21:14 -- nvmf/common.sh@154 -- # true 00:11:52.566 07:21:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:52.566 Cannot find device "nvmf_tgt_br2" 00:11:52.566 07:21:14 -- nvmf/common.sh@155 -- # true 00:11:52.566 07:21:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:52.566 07:21:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:52.566 Cannot find device "nvmf_tgt_br" 00:11:52.566 07:21:14 -- nvmf/common.sh@157 -- # true 00:11:52.566 07:21:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:52.566 Cannot find device "nvmf_tgt_br2" 00:11:52.566 07:21:14 -- nvmf/common.sh@158 -- # true 00:11:52.566 07:21:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:52.566 07:21:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:52.825 07:21:14 -- nvmf/common.sh@161 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:52.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.825 07:21:14 -- nvmf/common.sh@161 -- # true 00:11:52.825 07:21:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:52.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:52.825 07:21:14 -- nvmf/common.sh@162 -- # true 00:11:52.825 07:21:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:52.825 07:21:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:52.825 07:21:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:52.825 07:21:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:52.825 07:21:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:52.825 07:21:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:52.825 07:21:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:52.825 07:21:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:52.825 07:21:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:52.825 07:21:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:52.825 07:21:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:52.825 07:21:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:52.825 07:21:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:52.825 07:21:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:52.825 07:21:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:52.825 07:21:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:52.825 07:21:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:52.825 07:21:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:52.825 07:21:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:52.825 07:21:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:52.825 07:21:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:52.825 07:21:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:52.825 07:21:15 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:52.825 07:21:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:52.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:11:52.825 00:11:52.825 --- 10.0.0.2 ping statistics --- 00:11:52.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.825 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:52.825 07:21:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:52.825 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:52.825 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:52.825 00:11:52.825 --- 10.0.0.3 ping statistics --- 00:11:52.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.825 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:52.825 07:21:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:52.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:11:52.825 00:11:52.825 --- 10.0.0.1 ping statistics --- 00:11:52.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.825 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:11:52.825 07:21:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.825 07:21:15 -- nvmf/common.sh@421 -- # return 0 00:11:52.825 07:21:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:52.825 07:21:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.825 07:21:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:52.825 07:21:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:52.825 07:21:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.825 07:21:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:52.825 07:21:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:52.825 07:21:15 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:52.825 07:21:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:52.825 07:21:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.825 07:21:15 -- common/autotest_common.sh@10 -- # set +x 00:11:52.825 07:21:15 -- nvmf/common.sh@469 -- # nvmfpid=75840 00:11:52.825 07:21:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.825 07:21:15 -- nvmf/common.sh@470 -- # waitforlisten 75840 00:11:52.825 07:21:15 -- common/autotest_common.sh@829 -- # '[' -z 75840 ']' 00:11:52.825 07:21:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.825 07:21:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.825 07:21:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.825 07:21:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.825 07:21:15 -- common/autotest_common.sh@10 -- # set +x 00:11:53.083 [2024-11-28 07:21:15.126190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:53.083 [2024-11-28 07:21:15.126338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.083 [2024-11-28 07:21:15.265444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.342 [2024-11-28 07:21:15.361747] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:53.342 [2024-11-28 07:21:15.362141] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.342 [2024-11-28 07:21:15.362264] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
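
At this point nvmf_veth_init has finished building the test network: the nvmf_tgt_ns_spdk namespace holds the two target-side veth ends (10.0.0.2 and 10.0.0.3), the host keeps the initiator end at 10.0.0.1, the three host-side peer interfaces are enslaved to the nvmf_br bridge, TCP port 4420 is opened for NVMe/TCP, and the three pings verify reachability before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of that wiring — device names and addresses copied from the trace, with the pre-existing-device cleanup and error handling that nvmf/common.sh performs left out — would be:

  #!/usr/bin/env bash
  # Rebuild the three-veth + bridge test topology from the trace (run as root).
  set -e
  NS=nvmf_tgt_ns_spdk

  ip netns add "$NS"

  # One veth pair for the initiator, two for the target namespace.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Target ends move into the namespace; each side gets a 10.0.0.x/24 address.
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up, including loopback inside the namespace.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br  up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if  up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  # Bridge the host-side peers together so initiator and targets can talk.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # Accept NVMe/TCP (port 4420) from the initiator and hairpin bridge traffic.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Same sanity checks as the trace: targets from the host, initiator from the namespace.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1
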
00:11:53.342 [2024-11-28 07:21:15.362418] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.342 [2024-11-28 07:21:15.362617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.342 [2024-11-28 07:21:15.362697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.342 [2024-11-28 07:21:15.362772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.342 [2024-11-28 07:21:15.362767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.909 07:21:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.909 07:21:16 -- common/autotest_common.sh@862 -- # return 0 00:11:53.909 07:21:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:53.909 07:21:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.909 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:11:53.909 07:21:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.909 07:21:16 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:54.169 [2024-11-28 07:21:16.428526] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.428 07:21:16 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:54.687 07:21:16 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:54.687 07:21:16 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:54.946 07:21:16 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:54.946 07:21:16 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.205 07:21:17 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:55.205 07:21:17 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.464 07:21:17 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:55.464 07:21:17 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:55.723 07:21:17 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:55.982 07:21:18 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:55.982 07:21:18 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.240 07:21:18 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:56.240 07:21:18 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.512 07:21:18 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:56.512 07:21:18 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:56.771 07:21:18 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.029 07:21:19 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:57.029 07:21:19 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.288 07:21:19 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:57.288 07:21:19 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.546 07:21:19 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.805 [2024-11-28 07:21:19.985290] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.805 07:21:20 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:58.064 07:21:20 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:58.323 07:21:20 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.581 07:21:20 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:58.581 07:21:20 -- common/autotest_common.sh@1187 -- # local i=0 00:11:58.581 07:21:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.581 07:21:20 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:11:58.581 07:21:20 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:11:58.582 07:21:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:12:00.484 07:21:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:12:00.484 07:21:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:12:00.485 07:21:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.485 07:21:22 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:12:00.485 07:21:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.485 07:21:22 -- common/autotest_common.sh@1197 -- # return 0 00:12:00.485 07:21:22 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:00.485 [global] 00:12:00.485 thread=1 00:12:00.485 invalidate=1 00:12:00.485 rw=write 00:12:00.485 time_based=1 00:12:00.485 runtime=1 00:12:00.485 ioengine=libaio 00:12:00.485 direct=1 00:12:00.485 bs=4096 00:12:00.485 iodepth=1 00:12:00.485 norandommap=0 00:12:00.485 numjobs=1 00:12:00.485 00:12:00.485 verify_dump=1 00:12:00.485 verify_backlog=512 00:12:00.485 verify_state_save=0 00:12:00.485 do_verify=1 00:12:00.485 verify=crc32c-intel 00:12:00.485 [job0] 00:12:00.485 filename=/dev/nvme0n1 00:12:00.485 [job1] 00:12:00.485 filename=/dev/nvme0n2 00:12:00.485 [job2] 00:12:00.485 filename=/dev/nvme0n3 00:12:00.485 [job3] 00:12:00.485 filename=/dev/nvme0n4 00:12:00.485 Could not set queue depth (nvme0n1) 00:12:00.485 Could not set queue depth (nvme0n2) 00:12:00.485 Could not set queue depth (nvme0n3) 00:12:00.485 Could not set queue depth (nvme0n4) 00:12:00.743 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.743 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.743 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.743 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:00.743 fio-3.35 00:12:00.743 Starting 4 threads 00:12:02.119 00:12:02.119 job0: (groupid=0, jobs=1): err= 0: pid=76028: Thu Nov 28 07:21:24 2024 00:12:02.119 read: IOPS=2256, BW=9027KiB/s (9244kB/s)(9036KiB/1001msec) 
00:12:02.119 slat (nsec): min=8672, max=58291, avg=15590.00, stdev=5121.50 00:12:02.119 clat (usec): min=129, max=377, avg=219.17, stdev=49.49 00:12:02.119 lat (usec): min=142, max=392, avg=234.76, stdev=48.43 00:12:02.119 clat percentiles (usec): 00:12:02.119 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 165], 00:12:02.119 | 30.00th=[ 178], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 233], 00:12:02.119 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 293], 95.00th=[ 318], 00:12:02.119 | 99.00th=[ 343], 99.50th=[ 347], 99.90th=[ 359], 99.95th=[ 367], 00:12:02.119 | 99.99th=[ 379] 00:12:02.119 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:02.119 slat (usec): min=10, max=124, avg=21.41, stdev= 8.31 00:12:02.119 clat (usec): min=95, max=498, avg=158.76, stdev=36.56 00:12:02.119 lat (usec): min=119, max=521, avg=180.16, stdev=34.95 00:12:02.119 clat percentiles (usec): 00:12:02.119 | 1.00th=[ 105], 5.00th=[ 113], 10.00th=[ 118], 20.00th=[ 124], 00:12:02.119 | 30.00th=[ 129], 40.00th=[ 137], 50.00th=[ 153], 60.00th=[ 176], 00:12:02.119 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 217], 00:12:02.119 | 99.00th=[ 231], 99.50th=[ 239], 99.90th=[ 258], 99.95th=[ 306], 00:12:02.119 | 99.99th=[ 498] 00:12:02.119 bw ( KiB/s): min=12288, max=12288, per=33.37%, avg=12288.00, stdev= 0.00, samples=1 00:12:02.119 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:02.119 lat (usec) : 100=0.10%, 250=90.45%, 500=9.44% 00:12:02.119 cpu : usr=1.30%, sys=8.10%, ctx=4819, majf=0, minf=5 00:12:02.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.119 issued rwts: total=2259,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.119 job1: (groupid=0, jobs=1): err= 0: pid=76029: Thu Nov 28 07:21:24 2024 00:12:02.119 read: IOPS=1945, BW=7780KiB/s (7967kB/s)(7788KiB/1001msec) 00:12:02.119 slat (usec): min=12, max=108, avg=15.90, stdev= 3.67 00:12:02.119 clat (usec): min=158, max=3256, avg=260.78, stdev=87.02 00:12:02.119 lat (usec): min=175, max=3273, avg=276.68, stdev=87.71 00:12:02.119 clat percentiles (usec): 00:12:02.119 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:12:02.119 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:12:02.119 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 330], 95.00th=[ 371], 00:12:02.119 | 99.00th=[ 445], 99.50th=[ 506], 99.90th=[ 1287], 99.95th=[ 3261], 00:12:02.119 | 99.99th=[ 3261] 00:12:02.119 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:02.119 slat (nsec): min=16504, max=87040, avg=23129.85, stdev=4885.50 00:12:02.119 clat (usec): min=94, max=471, avg=198.51, stdev=46.63 00:12:02.119 lat (usec): min=117, max=498, avg=221.64, stdev=48.85 00:12:02.119 clat percentiles (usec): 00:12:02.119 | 1.00th=[ 105], 5.00th=[ 120], 10.00th=[ 165], 20.00th=[ 174], 00:12:02.119 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:12:02.119 | 70.00th=[ 206], 80.00th=[ 219], 90.00th=[ 245], 95.00th=[ 297], 00:12:02.119 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 437], 99.95th=[ 461], 00:12:02.119 | 99.99th=[ 474] 00:12:02.119 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:12:02.119 iops : min= 2048, max= 2048, avg=2048.00, stdev= 
0.00, samples=1 00:12:02.119 lat (usec) : 100=0.18%, 250=74.22%, 500=25.36%, 750=0.18%, 1000=0.03% 00:12:02.119 lat (msec) : 2=0.03%, 4=0.03% 00:12:02.119 cpu : usr=1.70%, sys=6.10%, ctx=3996, majf=0, minf=16 00:12:02.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.119 issued rwts: total=1947,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.119 job2: (groupid=0, jobs=1): err= 0: pid=76030: Thu Nov 28 07:21:24 2024 00:12:02.119 read: IOPS=2132, BW=8531KiB/s (8736kB/s)(8540KiB/1001msec) 00:12:02.119 slat (nsec): min=8848, max=56867, avg=14323.14, stdev=4843.93 00:12:02.119 clat (usec): min=139, max=1834, avg=226.36, stdev=60.96 00:12:02.119 lat (usec): min=155, max=1855, avg=240.69, stdev=60.08 00:12:02.119 clat percentiles (usec): 00:12:02.119 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:12:02.119 | 30.00th=[ 194], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 237], 00:12:02.119 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 306], 95.00th=[ 326], 00:12:02.119 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 388], 99.95th=[ 537], 00:12:02.119 | 99.99th=[ 1827] 00:12:02.119 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:02.119 slat (usec): min=12, max=113, avg=23.37, stdev= 7.76 00:12:02.119 clat (usec): min=75, max=3610, avg=163.27, stdev=108.26 00:12:02.119 lat (usec): min=124, max=3631, avg=186.65, stdev=108.48 00:12:02.119 clat percentiles (usec): 00:12:02.119 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 120], 20.00th=[ 127], 00:12:02.119 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 153], 60.00th=[ 172], 00:12:02.119 | 70.00th=[ 182], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 212], 00:12:02.119 | 99.00th=[ 229], 99.50th=[ 239], 99.90th=[ 2278], 99.95th=[ 2311], 00:12:02.119 | 99.99th=[ 3621] 00:12:02.119 bw ( KiB/s): min=12288, max=12288, per=33.37%, avg=12288.00, stdev= 0.00, samples=1 00:12:02.119 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:02.119 lat (usec) : 100=0.06%, 250=88.86%, 500=10.91%, 750=0.02% 00:12:02.119 lat (msec) : 2=0.06%, 4=0.09% 00:12:02.119 cpu : usr=2.80%, sys=6.70%, ctx=4704, majf=0, minf=5 00:12:02.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.120 issued rwts: total=2135,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.120 job3: (groupid=0, jobs=1): err= 0: pid=76031: Thu Nov 28 07:21:24 2024 00:12:02.120 read: IOPS=1981, BW=7924KiB/s (8114kB/s)(7932KiB/1001msec) 00:12:02.120 slat (nsec): min=11875, max=38636, avg=15701.79, stdev=3276.03 00:12:02.120 clat (usec): min=160, max=698, avg=261.72, stdev=55.06 00:12:02.120 lat (usec): min=180, max=717, avg=277.42, stdev=56.56 00:12:02.120 clat percentiles (usec): 00:12:02.120 | 1.00th=[ 192], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:12:02.120 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:12:02.120 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 338], 95.00th=[ 392], 00:12:02.120 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[ 676], 99.95th=[ 701], 00:12:02.120 | 99.99th=[ 701] 
00:12:02.120 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:02.120 slat (usec): min=16, max=127, avg=22.05, stdev= 4.47 00:12:02.120 clat (usec): min=99, max=430, avg=194.29, stdev=34.84 00:12:02.120 lat (usec): min=116, max=514, avg=216.34, stdev=36.57 00:12:02.120 clat percentiles (usec): 00:12:02.120 | 1.00th=[ 110], 5.00th=[ 139], 10.00th=[ 165], 20.00th=[ 174], 00:12:02.120 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:12:02.120 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 251], 00:12:02.120 | 99.00th=[ 310], 99.50th=[ 363], 99.90th=[ 400], 99.95th=[ 433], 00:12:02.120 | 99.99th=[ 433] 00:12:02.120 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:12:02.120 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:02.120 lat (usec) : 100=0.05%, 250=75.96%, 500=23.82%, 750=0.17% 00:12:02.120 cpu : usr=2.00%, sys=5.70%, ctx=4031, majf=0, minf=13 00:12:02.120 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:02.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.120 issued rwts: total=1983,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.120 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:02.120 00:12:02.120 Run status group 0 (all jobs): 00:12:02.120 READ: bw=32.5MiB/s (34.1MB/s), 7780KiB/s-9027KiB/s (7967kB/s-9244kB/s), io=32.5MiB (34.1MB), run=1001-1001msec 00:12:02.120 WRITE: bw=36.0MiB/s (37.7MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:12:02.120 00:12:02.120 Disk stats (read/write): 00:12:02.120 nvme0n1: ios=2098/2118, merge=0/0, ticks=469/312, in_queue=781, util=86.47% 00:12:02.120 nvme0n2: ios=1564/1878, merge=0/0, ticks=423/391, in_queue=814, util=87.49% 00:12:02.120 nvme0n3: ios=1970/2048, merge=0/0, ticks=432/319, in_queue=751, util=88.19% 00:12:02.120 nvme0n4: ios=1536/1934, merge=0/0, ticks=406/394, in_queue=800, util=89.59% 00:12:02.120 07:21:24 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:02.120 [global] 00:12:02.120 thread=1 00:12:02.120 invalidate=1 00:12:02.120 rw=randwrite 00:12:02.120 time_based=1 00:12:02.120 runtime=1 00:12:02.120 ioengine=libaio 00:12:02.120 direct=1 00:12:02.120 bs=4096 00:12:02.120 iodepth=1 00:12:02.120 norandommap=0 00:12:02.120 numjobs=1 00:12:02.120 00:12:02.120 verify_dump=1 00:12:02.120 verify_backlog=512 00:12:02.120 verify_state_save=0 00:12:02.120 do_verify=1 00:12:02.120 verify=crc32c-intel 00:12:02.120 [job0] 00:12:02.120 filename=/dev/nvme0n1 00:12:02.120 [job1] 00:12:02.120 filename=/dev/nvme0n2 00:12:02.120 [job2] 00:12:02.120 filename=/dev/nvme0n3 00:12:02.120 [job3] 00:12:02.120 filename=/dev/nvme0n4 00:12:02.120 Could not set queue depth (nvme0n1) 00:12:02.120 Could not set queue depth (nvme0n2) 00:12:02.120 Could not set queue depth (nvme0n3) 00:12:02.120 Could not set queue depth (nvme0n4) 00:12:02.120 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.120 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.120 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.120 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:12:02.120 fio-3.35 00:12:02.120 Starting 4 threads 00:12:03.552 00:12:03.552 job0: (groupid=0, jobs=1): err= 0: pid=76084: Thu Nov 28 07:21:25 2024 00:12:03.552 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:03.552 slat (nsec): min=11738, max=54888, avg=18082.67, stdev=8176.16 00:12:03.552 clat (usec): min=148, max=4280, avg=318.25, stdev=151.05 00:12:03.552 lat (usec): min=175, max=4305, avg=336.33, stdev=155.70 00:12:03.552 clat percentiles (usec): 00:12:03.552 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:12:03.552 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 314], 00:12:03.552 | 70.00th=[ 338], 80.00th=[ 400], 90.00th=[ 498], 95.00th=[ 570], 00:12:03.552 | 99.00th=[ 635], 99.50th=[ 693], 99.90th=[ 816], 99.95th=[ 4293], 00:12:03.552 | 99.99th=[ 4293] 00:12:03.552 write: IOPS=1630, BW=6521KiB/s (6678kB/s)(6528KiB/1001msec); 0 zone resets 00:12:03.552 slat (usec): min=16, max=112, avg=30.76, stdev=12.16 00:12:03.552 clat (usec): min=97, max=653, avg=260.64, stdev=102.78 00:12:03.552 lat (usec): min=115, max=697, avg=291.40, stdev=111.87 00:12:03.552 clat percentiles (usec): 00:12:03.552 | 1.00th=[ 105], 5.00th=[ 115], 10.00th=[ 126], 20.00th=[ 176], 00:12:03.552 | 30.00th=[ 190], 40.00th=[ 206], 50.00th=[ 260], 60.00th=[ 277], 00:12:03.552 | 70.00th=[ 306], 80.00th=[ 359], 90.00th=[ 412], 95.00th=[ 445], 00:12:03.552 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 611], 99.95th=[ 652], 00:12:03.552 | 99.99th=[ 652] 00:12:03.552 bw ( KiB/s): min= 8192, max= 8192, per=24.74%, avg=8192.00, stdev= 0.00, samples=1 00:12:03.552 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:03.552 lat (usec) : 100=0.16%, 250=45.52%, 500=48.83%, 750=5.37%, 1000=0.09% 00:12:03.552 lat (msec) : 10=0.03% 00:12:03.552 cpu : usr=1.50%, sys=6.70%, ctx=3169, majf=0, minf=13 00:12:03.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.552 issued rwts: total=1536,1632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.552 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.552 job1: (groupid=0, jobs=1): err= 0: pid=76085: Thu Nov 28 07:21:25 2024 00:12:03.552 read: IOPS=1647, BW=6589KiB/s (6748kB/s)(6596KiB/1001msec) 00:12:03.552 slat (nsec): min=8574, max=98517, avg=12690.45, stdev=4133.61 00:12:03.552 clat (usec): min=213, max=3078, avg=303.74, stdev=97.74 00:12:03.552 lat (usec): min=228, max=3099, avg=316.43, stdev=98.28 00:12:03.552 clat percentiles (usec): 00:12:03.552 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 265], 00:12:03.552 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:12:03.552 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 375], 95.00th=[ 412], 00:12:03.552 | 99.00th=[ 510], 99.50th=[ 586], 99.90th=[ 1762], 99.95th=[ 3064], 00:12:03.552 | 99.99th=[ 3064] 00:12:03.552 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:03.552 slat (nsec): min=10762, max=62637, avg=18682.36, stdev=5401.06 00:12:03.552 clat (usec): min=116, max=755, avg=212.05, stdev=32.61 00:12:03.552 lat (usec): min=132, max=774, avg=230.73, stdev=33.65 00:12:03.552 clat percentiles (usec): 00:12:03.552 | 1.00th=[ 147], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 188], 00:12:03.552 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 
00:12:03.552 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 260], 00:12:03.552 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 367], 99.95th=[ 490], 00:12:03.552 | 99.99th=[ 758] 00:12:03.552 bw ( KiB/s): min= 8208, max= 8208, per=24.78%, avg=8208.00, stdev= 0.00, samples=1 00:12:03.552 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:12:03.552 lat (usec) : 250=53.72%, 500=45.77%, 750=0.35%, 1000=0.05% 00:12:03.552 lat (msec) : 2=0.08%, 4=0.03% 00:12:03.552 cpu : usr=1.20%, sys=5.10%, ctx=3699, majf=0, minf=13 00:12:03.552 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.553 issued rwts: total=1649,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.553 job2: (groupid=0, jobs=1): err= 0: pid=76092: Thu Nov 28 07:21:25 2024 00:12:03.553 read: IOPS=1785, BW=7141KiB/s (7312kB/s)(7148KiB/1001msec) 00:12:03.553 slat (nsec): min=8610, max=45537, avg=14810.98, stdev=4399.01 00:12:03.553 clat (usec): min=190, max=673, avg=297.16, stdev=74.97 00:12:03.553 lat (usec): min=217, max=702, avg=311.97, stdev=76.82 00:12:03.553 clat percentiles (usec): 00:12:03.553 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:12:03.553 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 277], 60.00th=[ 302], 00:12:03.553 | 70.00th=[ 326], 80.00th=[ 363], 90.00th=[ 404], 95.00th=[ 441], 00:12:03.553 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[ 676], 99.95th=[ 676], 00:12:03.553 | 99.99th=[ 676] 00:12:03.553 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:03.553 slat (usec): min=10, max=119, avg=18.49, stdev= 5.31 00:12:03.553 clat (usec): min=94, max=687, avg=194.43, stdev=49.77 00:12:03.553 lat (usec): min=114, max=701, avg=212.92, stdev=50.18 00:12:03.553 clat percentiles (usec): 00:12:03.553 | 1.00th=[ 105], 5.00th=[ 117], 10.00th=[ 128], 20.00th=[ 161], 00:12:03.553 | 30.00th=[ 176], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:12:03.553 | 70.00th=[ 208], 80.00th=[ 227], 90.00th=[ 255], 95.00th=[ 277], 00:12:03.553 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 553], 99.95th=[ 619], 00:12:03.553 | 99.99th=[ 685] 00:12:03.553 bw ( KiB/s): min= 8192, max= 8192, per=24.74%, avg=8192.00, stdev= 0.00, samples=1 00:12:03.553 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:03.553 lat (usec) : 100=0.16%, 250=65.42%, 500=33.77%, 750=0.65% 00:12:03.553 cpu : usr=2.00%, sys=4.70%, ctx=3837, majf=0, minf=11 00:12:03.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.553 issued rwts: total=1787,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.553 job3: (groupid=0, jobs=1): err= 0: pid=76093: Thu Nov 28 07:21:25 2024 00:12:03.553 read: IOPS=2099, BW=8400KiB/s (8601kB/s)(8408KiB/1001msec) 00:12:03.553 slat (nsec): min=12139, max=72330, avg=16503.40, stdev=5336.03 00:12:03.553 clat (usec): min=156, max=3071, avg=225.51, stdev=86.17 00:12:03.553 lat (usec): min=169, max=3106, avg=242.02, stdev=87.78 00:12:03.553 clat percentiles (usec): 00:12:03.553 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 
20.00th=[ 180], 00:12:03.553 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 204], 60.00th=[ 249], 00:12:03.553 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:12:03.553 | 99.00th=[ 314], 99.50th=[ 469], 99.90th=[ 1045], 99.95th=[ 1074], 00:12:03.553 | 99.99th=[ 3064] 00:12:03.553 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:03.553 slat (usec): min=13, max=100, avg=21.71, stdev= 6.11 00:12:03.553 clat (usec): min=106, max=332, avg=166.91, stdev=41.73 00:12:03.553 lat (usec): min=126, max=433, avg=188.62, stdev=44.82 00:12:03.553 clat percentiles (usec): 00:12:03.553 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 122], 20.00th=[ 128], 00:12:03.553 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 149], 60.00th=[ 178], 00:12:03.553 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 233], 00:12:03.553 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 302], 99.95th=[ 306], 00:12:03.553 | 99.99th=[ 334] 00:12:03.553 bw ( KiB/s): min= 8192, max= 8192, per=24.74%, avg=8192.00, stdev= 0.00, samples=1 00:12:03.553 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:03.553 lat (usec) : 250=82.39%, 500=17.40%, 750=0.06%, 1000=0.09% 00:12:03.553 lat (msec) : 2=0.04%, 4=0.02% 00:12:03.553 cpu : usr=2.00%, sys=7.30%, ctx=4663, majf=0, minf=10 00:12:03.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.553 issued rwts: total=2102,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:03.553 00:12:03.553 Run status group 0 (all jobs): 00:12:03.553 READ: bw=27.6MiB/s (28.9MB/s), 6138KiB/s-8400KiB/s (6285kB/s-8601kB/s), io=27.6MiB (29.0MB), run=1001-1001msec 00:12:03.553 WRITE: bw=32.3MiB/s (33.9MB/s), 6521KiB/s-9.99MiB/s (6678kB/s-10.5MB/s), io=32.4MiB (33.9MB), run=1001-1001msec 00:12:03.553 00:12:03.553 Disk stats (read/write): 00:12:03.553 nvme0n1: ios=1365/1536, merge=0/0, ticks=387/420, in_queue=807, util=87.07% 00:12:03.553 nvme0n2: ios=1566/1568, merge=0/0, ticks=452/316, in_queue=768, util=88.13% 00:12:03.553 nvme0n3: ios=1536/1803, merge=0/0, ticks=447/348, in_queue=795, util=89.09% 00:12:03.553 nvme0n4: ios=1785/2048, merge=0/0, ticks=418/372, in_queue=790, util=89.64% 00:12:03.553 07:21:25 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:03.553 [global] 00:12:03.553 thread=1 00:12:03.553 invalidate=1 00:12:03.553 rw=write 00:12:03.553 time_based=1 00:12:03.553 runtime=1 00:12:03.553 ioengine=libaio 00:12:03.553 direct=1 00:12:03.553 bs=4096 00:12:03.553 iodepth=128 00:12:03.553 norandommap=0 00:12:03.553 numjobs=1 00:12:03.553 00:12:03.553 verify_dump=1 00:12:03.553 verify_backlog=512 00:12:03.553 verify_state_save=0 00:12:03.553 do_verify=1 00:12:03.553 verify=crc32c-intel 00:12:03.553 [job0] 00:12:03.553 filename=/dev/nvme0n1 00:12:03.553 [job1] 00:12:03.553 filename=/dev/nvme0n2 00:12:03.553 [job2] 00:12:03.553 filename=/dev/nvme0n3 00:12:03.553 [job3] 00:12:03.553 filename=/dev/nvme0n4 00:12:03.553 Could not set queue depth (nvme0n1) 00:12:03.553 Could not set queue depth (nvme0n2) 00:12:03.553 Could not set queue depth (nvme0n3) 00:12:03.553 Could not set queue depth (nvme0n4) 00:12:03.553 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:12:03.553 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:03.553 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:03.553 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:03.553 fio-3.35 00:12:03.553 Starting 4 threads 00:12:04.931 00:12:04.931 job0: (groupid=0, jobs=1): err= 0: pid=76153: Thu Nov 28 07:21:26 2024 00:12:04.931 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:04.931 slat (usec): min=9, max=6425, avg=174.35, stdev=784.33 00:12:04.931 clat (usec): min=13682, max=32197, avg=22866.52, stdev=3236.90 00:12:04.931 lat (usec): min=13796, max=32210, avg=23040.86, stdev=3181.35 00:12:04.931 clat percentiles (usec): 00:12:04.931 | 1.00th=[15533], 5.00th=[17433], 10.00th=[18220], 20.00th=[19530], 00:12:04.931 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:12:04.931 | 70.00th=[23987], 80.00th=[24773], 90.00th=[26608], 95.00th=[28443], 00:12:04.931 | 99.00th=[31065], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:12:04.931 | 99.99th=[32113] 00:12:04.931 write: IOPS=2957, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1004msec); 0 zone resets 00:12:04.931 slat (usec): min=10, max=7538, avg=179.41, stdev=843.30 00:12:04.931 clat (usec): min=3081, max=29962, avg=22740.48, stdev=3104.45 00:12:04.931 lat (usec): min=5009, max=29987, avg=22919.89, stdev=3025.31 00:12:04.931 clat percentiles (usec): 00:12:04.931 | 1.00th=[ 7570], 5.00th=[17433], 10.00th=[19268], 20.00th=[20841], 00:12:04.931 | 30.00th=[22152], 40.00th=[23200], 50.00th=[23462], 60.00th=[23987], 00:12:04.931 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25297], 95.00th=[25822], 00:12:04.931 | 99.00th=[27657], 99.50th=[28181], 99.90th=[30016], 99.95th=[30016], 00:12:04.931 | 99.99th=[30016] 00:12:04.931 bw ( KiB/s): min=10608, max=12144, per=17.23%, avg=11376.00, stdev=1086.12, samples=2 00:12:04.931 iops : min= 2652, max= 3036, avg=2844.00, stdev=271.53, samples=2 00:12:04.931 lat (msec) : 4=0.02%, 10=0.58%, 20=17.11%, 50=82.29% 00:12:04.931 cpu : usr=1.79%, sys=8.57%, ctx=489, majf=0, minf=11 00:12:04.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:04.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:04.932 issued rwts: total=2560,2969,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:04.932 job1: (groupid=0, jobs=1): err= 0: pid=76154: Thu Nov 28 07:21:26 2024 00:12:04.932 read: IOPS=5279, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1004msec) 00:12:04.932 slat (usec): min=4, max=6426, avg=88.75, stdev=401.74 00:12:04.932 clat (usec): min=392, max=24734, avg=11607.93, stdev=2750.18 00:12:04.932 lat (usec): min=3824, max=24753, avg=11696.68, stdev=2771.28 00:12:04.932 clat percentiles (usec): 00:12:04.932 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:12:04.932 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:12:04.932 | 70.00th=[11731], 80.00th=[12256], 90.00th=[13566], 95.00th=[19268], 00:12:04.932 | 99.00th=[22414], 99.50th=[22676], 99.90th=[23987], 99.95th=[23987], 00:12:04.932 | 99.99th=[24773] 00:12:04.932 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:12:04.932 slat (usec): min=10, max=6074, avg=87.04, stdev=391.97 
00:12:04.932 clat (usec): min=8106, max=28062, avg=11607.34, stdev=2607.50 00:12:04.932 lat (usec): min=8126, max=28080, avg=11694.37, stdev=2640.31 00:12:04.932 clat percentiles (usec): 00:12:04.932 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:12:04.932 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:12:04.932 | 70.00th=[11338], 80.00th=[11863], 90.00th=[13566], 95.00th=[15926], 00:12:04.932 | 99.00th=[25822], 99.50th=[27395], 99.90th=[27919], 99.95th=[28181], 00:12:04.932 | 99.99th=[28181] 00:12:04.932 bw ( KiB/s): min=20480, max=24576, per=34.11%, avg=22528.00, stdev=2896.31, samples=2 00:12:04.932 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:12:04.932 lat (usec) : 500=0.01% 00:12:04.932 lat (msec) : 4=0.05%, 10=13.04%, 20=83.73%, 50=3.17% 00:12:04.932 cpu : usr=4.89%, sys=14.46%, ctx=530, majf=0, minf=9 00:12:04.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:04.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:04.932 issued rwts: total=5301,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:04.932 job2: (groupid=0, jobs=1): err= 0: pid=76155: Thu Nov 28 07:21:26 2024 00:12:04.932 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:12:04.932 slat (usec): min=3, max=7363, avg=175.80, stdev=803.89 00:12:04.932 clat (usec): min=14700, max=29226, avg=22685.23, stdev=2177.99 00:12:04.932 lat (usec): min=15992, max=29242, avg=22861.03, stdev=2080.79 00:12:04.932 clat percentiles (usec): 00:12:04.932 | 1.00th=[17171], 5.00th=[18482], 10.00th=[19268], 20.00th=[20841], 00:12:04.932 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:12:04.932 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[25297], 00:12:04.932 | 99.00th=[28181], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:12:04.932 | 99.99th=[29230] 00:12:04.932 write: IOPS=3009, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1005msec); 0 zone resets 00:12:04.932 slat (usec): min=4, max=7006, avg=174.21, stdev=825.66 00:12:04.932 clat (usec): min=4340, max=29462, avg=22623.94, stdev=2967.97 00:12:04.932 lat (usec): min=4810, max=29488, avg=22798.15, stdev=2897.75 00:12:04.932 clat percentiles (usec): 00:12:04.932 | 1.00th=[ 9503], 5.00th=[17171], 10.00th=[19268], 20.00th=[20841], 00:12:04.932 | 30.00th=[21890], 40.00th=[22938], 50.00th=[23462], 60.00th=[23725], 00:12:04.932 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25297], 95.00th=[25560], 00:12:04.932 | 99.00th=[27657], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:12:04.932 | 99.99th=[29492] 00:12:04.932 bw ( KiB/s): min=10896, max=12288, per=17.55%, avg=11592.00, stdev=984.29, samples=2 00:12:04.932 iops : min= 2724, max= 3072, avg=2898.00, stdev=246.07, samples=2 00:12:04.932 lat (msec) : 10=0.59%, 20=14.41%, 50=85.00% 00:12:04.932 cpu : usr=2.69%, sys=7.67%, ctx=522, majf=0, minf=15 00:12:04.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:04.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:04.932 issued rwts: total=2560,3025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:04.932 job3: (groupid=0, jobs=1): err= 0: pid=76156: Thu Nov 
28 07:21:26 2024 00:12:04.932 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:12:04.932 slat (usec): min=3, max=9771, avg=101.46, stdev=484.36 00:12:04.932 clat (usec): min=9149, max=26075, avg=13577.43, stdev=2961.41 00:12:04.932 lat (usec): min=11481, max=26094, avg=13678.88, stdev=2947.24 00:12:04.932 clat percentiles (usec): 00:12:04.932 | 1.00th=[ 9765], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:12:04.932 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:12:04.932 | 70.00th=[12911], 80.00th=[13173], 90.00th=[19530], 95.00th=[20841], 00:12:04.932 | 99.00th=[23200], 99.50th=[23987], 99.90th=[26084], 99.95th=[26084], 00:12:04.932 | 99.99th=[26084] 00:12:04.932 write: IOPS=4946, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1004msec); 0 zone resets 00:12:04.932 slat (usec): min=10, max=5190, avg=99.74, stdev=431.91 00:12:04.932 clat (usec): min=3758, max=22209, avg=12947.84, stdev=1546.75 00:12:04.932 lat (usec): min=5789, max=22230, avg=13047.58, stdev=1507.01 00:12:04.932 clat percentiles (usec): 00:12:04.932 | 1.00th=[ 9634], 5.00th=[11600], 10.00th=[11994], 20.00th=[12256], 00:12:04.932 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:12:04.932 | 70.00th=[12911], 80.00th=[13173], 90.00th=[14877], 95.00th=[15926], 00:12:04.932 | 99.00th=[19268], 99.50th=[20579], 99.90th=[22152], 99.95th=[22152], 00:12:04.932 | 99.99th=[22152] 00:12:04.932 bw ( KiB/s): min=18232, max=20439, per=29.28%, avg=19335.50, stdev=1560.58, samples=2 00:12:04.932 iops : min= 4558, max= 5109, avg=4833.50, stdev=389.62, samples=2 00:12:04.932 lat (msec) : 4=0.01%, 10=1.42%, 20=94.02%, 50=4.55% 00:12:04.932 cpu : usr=3.79%, sys=13.56%, ctx=420, majf=0, minf=10 00:12:04.932 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:04.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:04.932 issued rwts: total=4608,4966,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.932 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:04.932 00:12:04.932 Run status group 0 (all jobs): 00:12:04.932 READ: bw=58.4MiB/s (61.3MB/s), 9.95MiB/s-20.6MiB/s (10.4MB/s-21.6MB/s), io=58.7MiB (61.6MB), run=1004-1005msec 00:12:04.932 WRITE: bw=64.5MiB/s (67.6MB/s), 11.6MiB/s-21.9MiB/s (12.1MB/s-23.0MB/s), io=64.8MiB (68.0MB), run=1004-1005msec 00:12:04.932 00:12:04.932 Disk stats (read/write): 00:12:04.932 nvme0n1: ios=2098/2533, merge=0/0, ticks=11459/12719, in_queue=24178, util=87.88% 00:12:04.932 nvme0n2: ios=4911/5120, merge=0/0, ticks=16632/15605, in_queue=32237, util=88.56% 00:12:04.932 nvme0n3: ios=2132/2560, merge=0/0, ticks=11593/12745, in_queue=24338, util=89.15% 00:12:04.932 nvme0n4: ios=4096/4525, merge=0/0, ticks=11769/12165, in_queue=23934, util=89.19% 00:12:04.932 07:21:26 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:04.932 [global] 00:12:04.932 thread=1 00:12:04.932 invalidate=1 00:12:04.932 rw=randwrite 00:12:04.932 time_based=1 00:12:04.932 runtime=1 00:12:04.932 ioengine=libaio 00:12:04.932 direct=1 00:12:04.932 bs=4096 00:12:04.932 iodepth=128 00:12:04.932 norandommap=0 00:12:04.932 numjobs=1 00:12:04.932 00:12:04.932 verify_dump=1 00:12:04.932 verify_backlog=512 00:12:04.932 verify_state_save=0 00:12:04.932 do_verify=1 00:12:04.932 verify=crc32c-intel 00:12:04.932 [job0] 00:12:04.932 filename=/dev/nvme0n1 00:12:04.932 [job1] 00:12:04.932 
filename=/dev/nvme0n2 00:12:04.932 [job2] 00:12:04.933 filename=/dev/nvme0n3 00:12:04.933 [job3] 00:12:04.933 filename=/dev/nvme0n4 00:12:04.933 Could not set queue depth (nvme0n1) 00:12:04.933 Could not set queue depth (nvme0n2) 00:12:04.933 Could not set queue depth (nvme0n3) 00:12:04.933 Could not set queue depth (nvme0n4) 00:12:04.933 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:04.933 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:04.933 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:04.933 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:04.933 fio-3.35 00:12:04.933 Starting 4 threads 00:12:06.309 00:12:06.309 job0: (groupid=0, jobs=1): err= 0: pid=76209: Thu Nov 28 07:21:28 2024 00:12:06.309 read: IOPS=5687, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1002msec) 00:12:06.309 slat (usec): min=5, max=8898, avg=80.47, stdev=480.88 00:12:06.309 clat (usec): min=1372, max=21279, avg=10992.76, stdev=1917.45 00:12:06.309 lat (usec): min=1380, max=21604, avg=11073.23, stdev=1926.90 00:12:06.309 clat percentiles (usec): 00:12:06.309 | 1.00th=[ 6063], 5.00th=[ 7767], 10.00th=[ 9634], 20.00th=[10290], 00:12:06.309 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:12:06.309 | 70.00th=[11207], 80.00th=[11338], 90.00th=[12256], 95.00th=[13435], 00:12:06.309 | 99.00th=[19006], 99.50th=[20055], 99.90th=[20579], 99.95th=[21365], 00:12:06.309 | 99.99th=[21365] 00:12:06.309 write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:12:06.309 slat (usec): min=5, max=7683, avg=80.72, stdev=452.23 00:12:06.309 clat (usec): min=3357, max=21203, avg=10469.30, stdev=1261.67 00:12:06.309 lat (usec): min=3412, max=21212, avg=10550.03, stdev=1202.57 00:12:06.309 clat percentiles (usec): 00:12:06.309 | 1.00th=[ 4752], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[ 9896], 00:12:06.309 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:12:06.309 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11469], 95.00th=[12125], 00:12:06.309 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15270], 99.95th=[15270], 00:12:06.309 | 99.99th=[21103] 00:12:06.309 bw ( KiB/s): min=24526, max=24526, per=35.58%, avg=24526.00, stdev= 0.00, samples=1 00:12:06.309 iops : min= 6131, max= 6131, avg=6131.00, stdev= 0.00, samples=1 00:12:06.309 lat (msec) : 2=0.05%, 4=0.25%, 10=16.30%, 20=83.17%, 50=0.22% 00:12:06.309 cpu : usr=5.39%, sys=15.08%, ctx=338, majf=0, minf=11 00:12:06.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:06.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.309 issued rwts: total=5699,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.309 job1: (groupid=0, jobs=1): err= 0: pid=76210: Thu Nov 28 07:21:28 2024 00:12:06.309 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:12:06.309 slat (usec): min=4, max=12850, avg=192.85, stdev=841.15 00:12:06.309 clat (usec): min=11815, max=36139, avg=24078.59, stdev=4216.08 00:12:06.309 lat (usec): min=12426, max=36161, avg=24271.44, stdev=4235.23 00:12:06.309 clat percentiles (usec): 00:12:06.309 | 1.00th=[14484], 5.00th=[16581], 
10.00th=[19268], 20.00th=[21103], 00:12:06.309 | 30.00th=[21890], 40.00th=[22414], 50.00th=[23462], 60.00th=[23987], 00:12:06.309 | 70.00th=[26084], 80.00th=[27919], 90.00th=[30016], 95.00th=[31589], 00:12:06.309 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35390], 99.95th=[35914], 00:12:06.309 | 99.99th=[35914] 00:12:06.309 write: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1004msec); 0 zone resets 00:12:06.309 slat (usec): min=5, max=7919, avg=161.94, stdev=696.07 00:12:06.309 clat (usec): min=2856, max=37787, avg=21929.43, stdev=4642.81 00:12:06.309 lat (usec): min=4502, max=38466, avg=22091.37, stdev=4654.12 00:12:06.309 clat percentiles (usec): 00:12:06.309 | 1.00th=[ 7242], 5.00th=[15008], 10.00th=[16057], 20.00th=[18482], 00:12:06.309 | 30.00th=[19530], 40.00th=[21365], 50.00th=[22676], 60.00th=[23200], 00:12:06.309 | 70.00th=[24249], 80.00th=[25297], 90.00th=[26870], 95.00th=[29492], 00:12:06.309 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:12:06.309 | 99.99th=[38011] 00:12:06.309 bw ( KiB/s): min=10600, max=12288, per=16.60%, avg=11444.00, stdev=1193.60, samples=2 00:12:06.309 iops : min= 2650, max= 3072, avg=2861.00, stdev=298.40, samples=2 00:12:06.309 lat (msec) : 4=0.02%, 10=0.96%, 20=21.41%, 50=77.61% 00:12:06.309 cpu : usr=2.89%, sys=7.58%, ctx=705, majf=0, minf=5 00:12:06.309 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:06.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.309 issued rwts: total=2560,2988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.309 job2: (groupid=0, jobs=1): err= 0: pid=76211: Thu Nov 28 07:21:28 2024 00:12:06.309 read: IOPS=4041, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1004msec) 00:12:06.309 slat (usec): min=4, max=12073, avg=126.23, stdev=614.28 00:12:06.309 clat (usec): min=2917, max=38140, avg=16404.58, stdev=6517.31 00:12:06.309 lat (usec): min=4793, max=38171, avg=16530.81, stdev=6556.48 00:12:06.309 clat percentiles (usec): 00:12:06.309 | 1.00th=[ 9503], 5.00th=[11600], 10.00th=[11863], 20.00th=[11994], 00:12:06.309 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12911], 00:12:06.309 | 70.00th=[19006], 80.00th=[23987], 90.00th=[26870], 95.00th=[29230], 00:12:06.309 | 99.00th=[33817], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:12:06.309 | 99.99th=[38011] 00:12:06.309 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:12:06.309 slat (usec): min=10, max=6604, avg=111.71, stdev=473.93 00:12:06.309 clat (usec): min=9287, max=28204, avg=14699.27, stdev=4091.43 00:12:06.309 lat (usec): min=10682, max=28266, avg=14810.98, stdev=4098.83 00:12:06.309 clat percentiles (usec): 00:12:06.309 | 1.00th=[10028], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:12:06.309 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:12:06.309 | 70.00th=[13960], 80.00th=[18482], 90.00th=[21890], 95.00th=[23725], 00:12:06.309 | 99.00th=[26346], 99.50th=[27395], 99.90th=[28181], 99.95th=[28181], 00:12:06.309 | 99.99th=[28181] 00:12:06.309 bw ( KiB/s): min=12288, max=20480, per=23.77%, avg=16384.00, stdev=5792.62, samples=2 00:12:06.309 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:12:06.309 lat (msec) : 4=0.01%, 10=1.39%, 20=76.71%, 50=21.89% 00:12:06.309 cpu : usr=3.49%, sys=10.97%, ctx=536, majf=0, minf=15 00:12:06.309 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:06.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.309 issued rwts: total=4058,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.309 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.309 job3: (groupid=0, jobs=1): err= 0: pid=76212: Thu Nov 28 07:21:28 2024 00:12:06.310 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:12:06.310 slat (usec): min=4, max=10378, avg=124.01, stdev=605.81 00:12:06.310 clat (usec): min=8715, max=32438, avg=16462.74, stdev=5256.03 00:12:06.310 lat (usec): min=8784, max=33433, avg=16586.74, stdev=5296.65 00:12:06.310 clat percentiles (usec): 00:12:06.310 | 1.00th=[10159], 5.00th=[11338], 10.00th=[11731], 20.00th=[12125], 00:12:06.310 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[16581], 00:12:06.310 | 70.00th=[21103], 80.00th=[22152], 90.00th=[23200], 95.00th=[25035], 00:12:06.310 | 99.00th=[30278], 99.50th=[31065], 99.90th=[32375], 99.95th=[32375], 00:12:06.310 | 99.99th=[32375] 00:12:06.310 write: IOPS=4062, BW=15.9MiB/s (16.6MB/s)(15.9MiB/1003msec); 0 zone resets 00:12:06.310 slat (usec): min=5, max=9162, avg=127.90, stdev=644.56 00:12:06.310 clat (usec): min=223, max=35327, avg=16331.09, stdev=6183.43 00:12:06.310 lat (usec): min=4673, max=35342, avg=16458.99, stdev=6212.28 00:12:06.310 clat percentiles (usec): 00:12:06.310 | 1.00th=[ 5604], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[11469], 00:12:06.310 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12518], 60.00th=[17957], 00:12:06.310 | 70.00th=[22152], 80.00th=[23200], 90.00th=[25035], 95.00th=[25560], 00:12:06.310 | 99.00th=[30802], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:12:06.310 | 99.99th=[35390] 00:12:06.310 bw ( KiB/s): min=12288, max=19288, per=22.90%, avg=15788.00, stdev=4949.75, samples=2 00:12:06.310 iops : min= 3072, max= 4822, avg=3947.00, stdev=1237.44, samples=2 00:12:06.310 lat (usec) : 250=0.01% 00:12:06.310 lat (msec) : 10=3.47%, 20=62.48%, 50=34.04% 00:12:06.310 cpu : usr=3.39%, sys=10.78%, ctx=506, majf=0, minf=15 00:12:06.310 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:06.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:06.310 issued rwts: total=3584,4075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.310 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:06.310 00:12:06.310 Run status group 0 (all jobs): 00:12:06.310 READ: bw=61.9MiB/s (64.9MB/s), 9.96MiB/s-22.2MiB/s (10.4MB/s-23.3MB/s), io=62.1MiB (65.1MB), run=1002-1004msec 00:12:06.310 WRITE: bw=67.3MiB/s (70.6MB/s), 11.6MiB/s-24.0MiB/s (12.2MB/s-25.1MB/s), io=67.6MiB (70.9MB), run=1002-1004msec 00:12:06.310 00:12:06.310 Disk stats (read/write): 00:12:06.310 nvme0n1: ios=4910/5120, merge=0/0, ticks=50208/49138, in_queue=99346, util=86.96% 00:12:06.310 nvme0n2: ios=2185/2560, merge=0/0, ticks=22255/23315, in_queue=45570, util=86.69% 00:12:06.310 nvme0n3: ios=3584/3705, merge=0/0, ticks=14259/11926, in_queue=26185, util=88.00% 00:12:06.310 nvme0n4: ios=2840/3072, merge=0/0, ticks=31666/32771, in_queue=64437, util=87.37% 00:12:06.310 07:21:28 -- target/fio.sh@55 -- # sync 00:12:06.310 07:21:28 -- target/fio.sh@59 -- # fio_pid=76226 00:12:06.310 07:21:28 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p 
nvmf -i 4096 -d 1 -t read -r 10 00:12:06.310 07:21:28 -- target/fio.sh@61 -- # sleep 3 00:12:06.310 [global] 00:12:06.310 thread=1 00:12:06.310 invalidate=1 00:12:06.310 rw=read 00:12:06.310 time_based=1 00:12:06.310 runtime=10 00:12:06.310 ioengine=libaio 00:12:06.310 direct=1 00:12:06.310 bs=4096 00:12:06.310 iodepth=1 00:12:06.310 norandommap=1 00:12:06.310 numjobs=1 00:12:06.310 00:12:06.310 [job0] 00:12:06.310 filename=/dev/nvme0n1 00:12:06.310 [job1] 00:12:06.310 filename=/dev/nvme0n2 00:12:06.310 [job2] 00:12:06.310 filename=/dev/nvme0n3 00:12:06.310 [job3] 00:12:06.310 filename=/dev/nvme0n4 00:12:06.310 Could not set queue depth (nvme0n1) 00:12:06.310 Could not set queue depth (nvme0n2) 00:12:06.310 Could not set queue depth (nvme0n3) 00:12:06.310 Could not set queue depth (nvme0n4) 00:12:06.310 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.310 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.310 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.310 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.310 fio-3.35 00:12:06.310 Starting 4 threads 00:12:09.596 07:21:31 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:09.596 fio: pid=76275, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:09.597 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=66699264, buflen=4096 00:12:09.597 07:21:31 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:09.597 fio: pid=76274, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:09.597 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47714304, buflen=4096 00:12:09.597 07:21:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:09.597 07:21:31 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:09.855 fio: pid=76272, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:09.855 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=15466496, buflen=4096 00:12:10.115 07:21:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:10.115 07:21:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:10.374 fio: pid=76273, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:10.374 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=62091264, buflen=4096 00:12:10.374 00:12:10.374 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76272: Thu Nov 28 07:21:32 2024 00:12:10.374 read: IOPS=5639, BW=22.0MiB/s (23.1MB/s)(78.8MiB/3575msec) 00:12:10.374 slat (usec): min=10, max=11885, avg=15.68, stdev=140.94 00:12:10.374 clat (usec): min=59, max=3234, avg=160.34, stdev=44.65 00:12:10.374 lat (usec): min=131, max=12047, avg=176.03, stdev=148.22 00:12:10.374 clat percentiles (usec): 00:12:10.374 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:12:10.374 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:12:10.374 | 70.00th=[ 163], 80.00th=[ 169], 
90.00th=[ 180], 95.00th=[ 221], 00:12:10.374 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 676], 99.95th=[ 881], 00:12:10.374 | 99.99th=[ 1598] 00:12:10.374 bw ( KiB/s): min=18976, max=23808, per=35.03%, avg=22725.33, stdev=1878.99, samples=6 00:12:10.374 iops : min= 4744, max= 5952, avg=5681.33, stdev=469.75, samples=6 00:12:10.374 lat (usec) : 100=0.01%, 250=98.08%, 500=1.76%, 750=0.07%, 1000=0.05% 00:12:10.374 lat (msec) : 2=0.02%, 4=0.01% 00:12:10.374 cpu : usr=1.65%, sys=6.97%, ctx=20167, majf=0, minf=1 00:12:10.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.374 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.374 issued rwts: total=20161,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.374 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76273: Thu Nov 28 07:21:32 2024 00:12:10.374 read: IOPS=3887, BW=15.2MiB/s (15.9MB/s)(59.2MiB/3900msec) 00:12:10.374 slat (usec): min=8, max=15792, avg=17.87, stdev=220.54 00:12:10.374 clat (nsec): min=1288, max=2023.4k, avg=237965.69, stdev=53948.91 00:12:10.374 lat (usec): min=143, max=16065, avg=255.84, stdev=227.23 00:12:10.374 clat percentiles (usec): 00:12:10.374 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 178], 00:12:10.374 | 30.00th=[ 210], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:12:10.374 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:12:10.374 | 99.00th=[ 318], 99.50th=[ 326], 99.90th=[ 469], 99.95th=[ 955], 00:12:10.374 | 99.99th=[ 1352] 00:12:10.374 bw ( KiB/s): min=13992, max=17424, per=23.12%, avg=14998.71, stdev=1393.86, samples=7 00:12:10.374 iops : min= 3498, max= 4356, avg=3749.57, stdev=348.33, samples=7 00:12:10.374 lat (usec) : 2=0.01%, 250=39.17%, 500=60.73%, 750=0.02%, 1000=0.02% 00:12:10.374 lat (msec) : 2=0.04%, 4=0.01% 00:12:10.374 cpu : usr=1.28%, sys=5.10%, ctx=15169, majf=0, minf=1 00:12:10.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.374 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.374 issued rwts: total=15160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.374 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76274: Thu Nov 28 07:21:32 2024 00:12:10.374 read: IOPS=3533, BW=13.8MiB/s (14.5MB/s)(45.5MiB/3297msec) 00:12:10.374 slat (usec): min=7, max=13766, avg=13.28, stdev=146.20 00:12:10.374 clat (usec): min=154, max=7237, avg=268.71, stdev=78.49 00:12:10.374 lat (usec): min=165, max=14006, avg=281.98, stdev=165.74 00:12:10.374 clat percentiles (usec): 00:12:10.374 | 1.00th=[ 210], 5.00th=[ 237], 10.00th=[ 247], 20.00th=[ 253], 00:12:10.374 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 265], 60.00th=[ 269], 00:12:10.374 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:12:10.374 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 758], 99.95th=[ 1205], 00:12:10.374 | 99.99th=[ 3392] 00:12:10.374 bw ( KiB/s): min=13992, max=14488, per=21.88%, avg=14196.00, stdev=204.95, samples=6 00:12:10.374 iops : min= 3498, max= 3622, avg=3549.00, stdev=51.24, samples=6 00:12:10.374 lat (usec) : 250=14.52%, 500=85.31%, 
750=0.06%, 1000=0.03% 00:12:10.374 lat (msec) : 2=0.05%, 4=0.01%, 10=0.01% 00:12:10.374 cpu : usr=0.67%, sys=4.00%, ctx=11654, majf=0, minf=2 00:12:10.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.374 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.374 issued rwts: total=11650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.374 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=76275: Thu Nov 28 07:21:32 2024 00:12:10.374 read: IOPS=5381, BW=21.0MiB/s (22.0MB/s)(63.6MiB/3026msec) 00:12:10.374 slat (usec): min=10, max=103, avg=14.33, stdev= 4.29 00:12:10.374 clat (usec): min=128, max=3478, avg=170.21, stdev=43.18 00:12:10.374 lat (usec): min=139, max=3506, avg=184.54, stdev=43.49 00:12:10.374 clat percentiles (usec): 00:12:10.374 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:12:10.374 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:12:10.374 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:12:10.374 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 302], 99.95th=[ 396], 00:12:10.374 | 99.99th=[ 2638] 00:12:10.374 bw ( KiB/s): min=21160, max=21768, per=33.24%, avg=21564.00, stdev=221.57, samples=6 00:12:10.374 iops : min= 5290, max= 5442, avg=5391.00, stdev=55.39, samples=6 00:12:10.374 lat (usec) : 250=99.86%, 500=0.09%, 750=0.01% 00:12:10.374 lat (msec) : 2=0.01%, 4=0.02% 00:12:10.374 cpu : usr=1.39%, sys=6.78%, ctx=16296, majf=0, minf=1 00:12:10.374 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:10.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.374 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.374 issued rwts: total=16285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.374 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:10.374 00:12:10.374 Run status group 0 (all jobs): 00:12:10.374 READ: bw=63.4MiB/s (66.4MB/s), 13.8MiB/s-22.0MiB/s (14.5MB/s-23.1MB/s), io=247MiB (259MB), run=3026-3900msec 00:12:10.374 00:12:10.374 Disk stats (read/write): 00:12:10.374 nvme0n1: ios=18925/0, merge=0/0, ticks=3123/0, in_queue=3123, util=95.45% 00:12:10.374 nvme0n2: ios=15020/0, merge=0/0, ticks=3594/0, in_queue=3594, util=95.64% 00:12:10.374 nvme0n3: ios=11007/0, merge=0/0, ticks=2803/0, in_queue=2803, util=96.06% 00:12:10.375 nvme0n4: ios=15453/0, merge=0/0, ticks=2694/0, in_queue=2694, util=96.70% 00:12:10.375 07:21:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:10.375 07:21:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:10.633 07:21:32 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:10.633 07:21:32 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:10.892 07:21:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:10.892 07:21:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:11.151 07:21:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.151 07:21:33 -- 
target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:11.411 07:21:33 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:11.411 07:21:33 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:11.669 07:21:33 -- target/fio.sh@69 -- # fio_status=0 00:12:11.669 07:21:33 -- target/fio.sh@70 -- # wait 76226 00:12:11.669 07:21:33 -- target/fio.sh@70 -- # fio_status=4 00:12:11.669 07:21:33 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.669 07:21:33 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.669 07:21:33 -- common/autotest_common.sh@1208 -- # local i=0 00:12:11.669 07:21:33 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:12:11.669 07:21:33 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.669 07:21:33 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.669 07:21:33 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:12:11.669 07:21:33 -- common/autotest_common.sh@1220 -- # return 0 00:12:11.669 07:21:33 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:11.669 nvmf hotplug test: fio failed as expected 00:12:11.669 07:21:33 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:11.669 07:21:33 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.236 07:21:34 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:12.236 07:21:34 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:12.236 07:21:34 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:12.236 07:21:34 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:12.236 07:21:34 -- target/fio.sh@91 -- # nvmftestfini 00:12:12.236 07:21:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:12.236 07:21:34 -- nvmf/common.sh@116 -- # sync 00:12:12.236 07:21:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:12.237 07:21:34 -- nvmf/common.sh@119 -- # set +e 00:12:12.237 07:21:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:12.237 07:21:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:12.237 rmmod nvme_tcp 00:12:12.237 rmmod nvme_fabrics 00:12:12.237 rmmod nvme_keyring 00:12:12.237 07:21:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:12.237 07:21:34 -- nvmf/common.sh@123 -- # set -e 00:12:12.237 07:21:34 -- nvmf/common.sh@124 -- # return 0 00:12:12.237 07:21:34 -- nvmf/common.sh@477 -- # '[' -n 75840 ']' 00:12:12.237 07:21:34 -- nvmf/common.sh@478 -- # killprocess 75840 00:12:12.237 07:21:34 -- common/autotest_common.sh@936 -- # '[' -z 75840 ']' 00:12:12.237 07:21:34 -- common/autotest_common.sh@940 -- # kill -0 75840 00:12:12.237 07:21:34 -- common/autotest_common.sh@941 -- # uname 00:12:12.237 07:21:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:12.237 07:21:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75840 00:12:12.237 killing process with pid 75840 00:12:12.237 07:21:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:12.237 07:21:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:12.237 07:21:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75840' 00:12:12.237 07:21:34 -- common/autotest_common.sh@955 -- # kill 
75840 00:12:12.237 07:21:34 -- common/autotest_common.sh@960 -- # wait 75840 00:12:12.496 07:21:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:12.496 07:21:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:12.496 07:21:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:12.496 07:21:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:12.496 07:21:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:12.496 07:21:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.496 07:21:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.496 07:21:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.496 07:21:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:12.496 ************************************ 00:12:12.496 END TEST nvmf_fio_target 00:12:12.496 ************************************ 00:12:12.496 00:12:12.496 real 0m20.046s 00:12:12.496 user 1m15.833s 00:12:12.496 sys 0m10.570s 00:12:12.496 07:21:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:12.496 07:21:34 -- common/autotest_common.sh@10 -- # set +x 00:12:12.496 07:21:34 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:12.496 07:21:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:12.496 07:21:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:12.496 07:21:34 -- common/autotest_common.sh@10 -- # set +x 00:12:12.496 ************************************ 00:12:12.496 START TEST nvmf_bdevio 00:12:12.496 ************************************ 00:12:12.496 07:21:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:12.496 * Looking for test storage... 00:12:12.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:12.496 07:21:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:12.496 07:21:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:12.496 07:21:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:12.496 07:21:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:12.496 07:21:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:12.496 07:21:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:12.496 07:21:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:12.496 07:21:34 -- scripts/common.sh@335 -- # IFS=.-: 00:12:12.496 07:21:34 -- scripts/common.sh@335 -- # read -ra ver1 00:12:12.496 07:21:34 -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.496 07:21:34 -- scripts/common.sh@336 -- # read -ra ver2 00:12:12.496 07:21:34 -- scripts/common.sh@337 -- # local 'op=<' 00:12:12.496 07:21:34 -- scripts/common.sh@339 -- # ver1_l=2 00:12:12.496 07:21:34 -- scripts/common.sh@340 -- # ver2_l=1 00:12:12.496 07:21:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:12.496 07:21:34 -- scripts/common.sh@343 -- # case "$op" in 00:12:12.496 07:21:34 -- scripts/common.sh@344 -- # : 1 00:12:12.496 07:21:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:12.496 07:21:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:12.496 07:21:34 -- scripts/common.sh@364 -- # decimal 1 00:12:12.496 07:21:34 -- scripts/common.sh@352 -- # local d=1 00:12:12.496 07:21:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.496 07:21:34 -- scripts/common.sh@354 -- # echo 1 00:12:12.496 07:21:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:12.496 07:21:34 -- scripts/common.sh@365 -- # decimal 2 00:12:12.496 07:21:34 -- scripts/common.sh@352 -- # local d=2 00:12:12.496 07:21:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.496 07:21:34 -- scripts/common.sh@354 -- # echo 2 00:12:12.496 07:21:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:12.496 07:21:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:12.496 07:21:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:12.496 07:21:34 -- scripts/common.sh@367 -- # return 0 00:12:12.496 07:21:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.496 07:21:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:12.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.496 --rc genhtml_branch_coverage=1 00:12:12.496 --rc genhtml_function_coverage=1 00:12:12.496 --rc genhtml_legend=1 00:12:12.496 --rc geninfo_all_blocks=1 00:12:12.496 --rc geninfo_unexecuted_blocks=1 00:12:12.496 00:12:12.496 ' 00:12:12.755 07:21:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:12.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.755 --rc genhtml_branch_coverage=1 00:12:12.755 --rc genhtml_function_coverage=1 00:12:12.755 --rc genhtml_legend=1 00:12:12.755 --rc geninfo_all_blocks=1 00:12:12.755 --rc geninfo_unexecuted_blocks=1 00:12:12.755 00:12:12.755 ' 00:12:12.755 07:21:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:12.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.755 --rc genhtml_branch_coverage=1 00:12:12.755 --rc genhtml_function_coverage=1 00:12:12.755 --rc genhtml_legend=1 00:12:12.755 --rc geninfo_all_blocks=1 00:12:12.755 --rc geninfo_unexecuted_blocks=1 00:12:12.755 00:12:12.755 ' 00:12:12.755 07:21:34 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:12.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.755 --rc genhtml_branch_coverage=1 00:12:12.755 --rc genhtml_function_coverage=1 00:12:12.755 --rc genhtml_legend=1 00:12:12.755 --rc geninfo_all_blocks=1 00:12:12.755 --rc geninfo_unexecuted_blocks=1 00:12:12.755 00:12:12.755 ' 00:12:12.755 07:21:34 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:12.755 07:21:34 -- nvmf/common.sh@7 -- # uname -s 00:12:12.755 07:21:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.755 07:21:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.755 07:21:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.755 07:21:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.755 07:21:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.755 07:21:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.755 07:21:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.755 07:21:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.755 07:21:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.755 07:21:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.755 07:21:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:12:12.755 
07:21:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:12:12.755 07:21:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.755 07:21:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.755 07:21:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:12.755 07:21:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:12.755 07:21:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.755 07:21:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.755 07:21:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.756 07:21:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.756 07:21:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.756 07:21:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.756 07:21:34 -- paths/export.sh@5 -- # export PATH 00:12:12.756 07:21:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.756 07:21:34 -- nvmf/common.sh@46 -- # : 0 00:12:12.756 07:21:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:12.756 07:21:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:12.756 07:21:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:12.756 07:21:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.756 07:21:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.756 07:21:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
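For reference, a condensed sketch of the network topology that nvmf_veth_init builds in the trace below: a veth pair whose target end moves into the nvmf_tgt_ns_spdk namespace, a second pair for the 10.0.0.3 interface (omitted here), and a host-side bridge joining the peer ends. Interface names, addresses, and the port-4420 iptables rule are taken directly from the logged commands; only the condensed ordering is mine.

  # create the namespace and the initiator/target veth pairs (names as logged)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1/24 on the host, target gets 10.0.0.2/24 in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the host-side peer ends together and admit NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings to 10.0.0.2/10.0.0.3 and back to 10.0.0.1 later in the trace are the sanity check that this wiring is up before the target is started.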
00:12:12.756 07:21:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:12.756 07:21:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:12.756 07:21:34 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.756 07:21:34 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.756 07:21:34 -- target/bdevio.sh@14 -- # nvmftestinit 00:12:12.756 07:21:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:12.756 07:21:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.756 07:21:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:12.756 07:21:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:12.756 07:21:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:12.756 07:21:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.756 07:21:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.756 07:21:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.756 07:21:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:12.756 07:21:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:12.756 07:21:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:12.756 07:21:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:12.756 07:21:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:12.756 07:21:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:12.756 07:21:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:12.756 07:21:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:12.756 07:21:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:12.756 07:21:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:12.756 07:21:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:12.756 07:21:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:12.756 07:21:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:12.756 07:21:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:12.756 07:21:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:12.756 07:21:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:12.756 07:21:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:12.756 07:21:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:12.756 07:21:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:12.756 07:21:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:12.756 Cannot find device "nvmf_tgt_br" 00:12:12.756 07:21:34 -- nvmf/common.sh@154 -- # true 00:12:12.756 07:21:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:12.756 Cannot find device "nvmf_tgt_br2" 00:12:12.756 07:21:34 -- nvmf/common.sh@155 -- # true 00:12:12.756 07:21:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:12.756 07:21:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:12.756 Cannot find device "nvmf_tgt_br" 00:12:12.756 07:21:34 -- nvmf/common.sh@157 -- # true 00:12:12.756 07:21:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:12.756 Cannot find device "nvmf_tgt_br2" 00:12:12.756 07:21:34 -- nvmf/common.sh@158 -- # true 00:12:12.756 07:21:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:12.756 07:21:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:12.756 07:21:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:12.756 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:12:12.756 07:21:34 -- nvmf/common.sh@161 -- # true 00:12:12.756 07:21:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:12.756 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:12.756 07:21:34 -- nvmf/common.sh@162 -- # true 00:12:12.756 07:21:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:12.756 07:21:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:12.756 07:21:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:12.756 07:21:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:12.756 07:21:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:12.756 07:21:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:12.756 07:21:35 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:12.756 07:21:35 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:12.756 07:21:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:12.756 07:21:35 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:13.017 07:21:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:13.017 07:21:35 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:13.017 07:21:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:13.017 07:21:35 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:13.017 07:21:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:13.017 07:21:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:13.017 07:21:35 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:13.017 07:21:35 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:13.017 07:21:35 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:13.017 07:21:35 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:13.017 07:21:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:13.017 07:21:35 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:13.017 07:21:35 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:13.017 07:21:35 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:13.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:13.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:12:13.017 00:12:13.017 --- 10.0.0.2 ping statistics --- 00:12:13.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.017 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:13.017 07:21:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:13.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:13.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:12:13.017 00:12:13.017 --- 10.0.0.3 ping statistics --- 00:12:13.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.017 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:13.017 07:21:35 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:13.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:13.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:13.017 00:12:13.017 --- 10.0.0.1 ping statistics --- 00:12:13.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.017 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:13.017 07:21:35 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.017 07:21:35 -- nvmf/common.sh@421 -- # return 0 00:12:13.017 07:21:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:13.017 07:21:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.017 07:21:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:13.017 07:21:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:13.017 07:21:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.017 07:21:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:13.017 07:21:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:13.017 07:21:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:13.017 07:21:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:13.017 07:21:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.017 07:21:35 -- common/autotest_common.sh@10 -- # set +x 00:12:13.017 07:21:35 -- nvmf/common.sh@469 -- # nvmfpid=76546 00:12:13.017 07:21:35 -- nvmf/common.sh@470 -- # waitforlisten 76546 00:12:13.017 07:21:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:13.017 07:21:35 -- common/autotest_common.sh@829 -- # '[' -z 76546 ']' 00:12:13.017 07:21:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.017 07:21:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.017 07:21:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.017 07:21:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.017 07:21:35 -- common/autotest_common.sh@10 -- # set +x 00:12:13.017 [2024-11-28 07:21:35.202962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:13.017 [2024-11-28 07:21:35.203054] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.280 [2024-11-28 07:21:35.347763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.280 [2024-11-28 07:21:35.431549] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:13.280 [2024-11-28 07:21:35.431699] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.280 [2024-11-28 07:21:35.431712] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.280 [2024-11-28 07:21:35.431748] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
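For reference, the subsystem setup that target/bdevio.sh drives next can be reproduced with plain rpc.py calls; the arguments below are copied from the rpc_cmd trace that follows (a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 backed by that namespace, and a listener on 10.0.0.2:4420). The explicit rpc.py invocation is the only assumption here; the test itself goes through its rpc_cmd wrapper.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio app is then pointed at this target through the JSON that gen_nvmf_target_json emits, which is what the here-document in the trace below expands to.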
00:12:13.280 [2024-11-28 07:21:35.431884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:13.280 [2024-11-28 07:21:35.431922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:13.280 [2024-11-28 07:21:35.432524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:13.280 [2024-11-28 07:21:35.432532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.300 07:21:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:14.300 07:21:36 -- common/autotest_common.sh@862 -- # return 0 00:12:14.300 07:21:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:14.300 07:21:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:14.300 07:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 07:21:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.300 07:21:36 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.300 07:21:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.300 07:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 [2024-11-28 07:21:36.273977] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.300 07:21:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.300 07:21:36 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:14.300 07:21:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.300 07:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 Malloc0 00:12:14.300 07:21:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.300 07:21:36 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:14.300 07:21:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.300 07:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 07:21:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.300 07:21:36 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:14.300 07:21:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.300 07:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 07:21:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.300 07:21:36 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.300 07:21:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.300 07:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:14.300 [2024-11-28 07:21:36.338399] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.300 07:21:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.300 07:21:36 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:14.300 07:21:36 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:14.300 07:21:36 -- nvmf/common.sh@520 -- # config=() 00:12:14.300 07:21:36 -- nvmf/common.sh@520 -- # local subsystem config 00:12:14.301 07:21:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:12:14.301 07:21:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:12:14.301 { 00:12:14.301 "params": { 00:12:14.301 "name": "Nvme$subsystem", 00:12:14.301 "trtype": "$TEST_TRANSPORT", 00:12:14.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.301 "adrfam": "ipv4", 00:12:14.301 "trsvcid": "$NVMF_PORT", 00:12:14.301 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.301 "hdgst": ${hdgst:-false}, 00:12:14.301 "ddgst": ${ddgst:-false} 00:12:14.301 }, 00:12:14.301 "method": "bdev_nvme_attach_controller" 00:12:14.301 } 00:12:14.301 EOF 00:12:14.301 )") 00:12:14.301 07:21:36 -- nvmf/common.sh@542 -- # cat 00:12:14.301 07:21:36 -- nvmf/common.sh@544 -- # jq . 00:12:14.301 07:21:36 -- nvmf/common.sh@545 -- # IFS=, 00:12:14.301 07:21:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:12:14.301 "params": { 00:12:14.301 "name": "Nvme1", 00:12:14.301 "trtype": "tcp", 00:12:14.301 "traddr": "10.0.0.2", 00:12:14.301 "adrfam": "ipv4", 00:12:14.301 "trsvcid": "4420", 00:12:14.301 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.301 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.301 "hdgst": false, 00:12:14.301 "ddgst": false 00:12:14.301 }, 00:12:14.301 "method": "bdev_nvme_attach_controller" 00:12:14.301 }' 00:12:14.301 [2024-11-28 07:21:36.398160] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:14.301 [2024-11-28 07:21:36.398260] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76582 ] 00:12:14.301 [2024-11-28 07:21:36.541541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:14.581 [2024-11-28 07:21:36.627872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.581 [2024-11-28 07:21:36.627993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.581 [2024-11-28 07:21:36.628001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.581 [2024-11-28 07:21:36.805089] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:12:14.581 [2024-11-28 07:21:36.805406] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:14.581 I/O targets: 00:12:14.581 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:14.581 00:12:14.581 00:12:14.581 CUnit - A unit testing framework for C - Version 2.1-3 00:12:14.581 http://cunit.sourceforge.net/ 00:12:14.581 00:12:14.581 00:12:14.581 Suite: bdevio tests on: Nvme1n1 00:12:14.581 Test: blockdev write read block ...passed 00:12:14.581 Test: blockdev write zeroes read block ...passed 00:12:14.581 Test: blockdev write zeroes read no split ...passed 00:12:14.581 Test: blockdev write zeroes read split ...passed 00:12:14.581 Test: blockdev write zeroes read split partial ...passed 00:12:14.581 Test: blockdev reset ...[2024-11-28 07:21:36.840296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:14.581 [2024-11-28 07:21:36.840498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a7ea0 (9): Bad file descriptor 00:12:14.581 [2024-11-28 07:21:36.854800] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:14.581 passed 00:12:14.581 Test: blockdev write read 8 blocks ...passed 00:12:14.840 Test: blockdev write read size > 128k ...passed 00:12:14.840 Test: blockdev write read invalid size ...passed 00:12:14.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:14.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:14.840 Test: blockdev write read max offset ...passed 00:12:14.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:14.840 Test: blockdev writev readv 8 blocks ...passed 00:12:14.840 Test: blockdev writev readv 30 x 1block ...passed 00:12:14.840 Test: blockdev writev readv block ...passed 00:12:14.840 Test: blockdev writev readv size > 128k ...passed 00:12:14.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:14.840 Test: blockdev comparev and writev ...[2024-11-28 07:21:36.863138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:14.840 [2024-11-28 07:21:36.863203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.863234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:14.840 [2024-11-28 07:21:36.863250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.863626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:14.840 [2024-11-28 07:21:36.863667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.863692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:14.840 [2024-11-28 07:21:36.863708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.864097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:14.840 [2024-11-28 07:21:36.864139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.864165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:14.840 [2024-11-28 07:21:36.864179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.864679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:14.840 [2024-11-28 07:21:36.864717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.864742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:14.840 [2024-11-28 07:21:36.864756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:14.840 passed 00:12:14.840 Test: blockdev nvme passthru rw ...passed 00:12:14.840 Test: blockdev nvme passthru vendor specific ...[2024-11-28 07:21:36.865672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:14.840 [2024-11-28 07:21:36.865705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.865832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:14.840 [2024-11-28 07:21:36.865861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.865993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:14.840 [2024-11-28 07:21:36.866022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:14.840 [2024-11-28 07:21:36.866147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:14.840 [2024-11-28 07:21:36.866176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:14.840 passed 00:12:14.840 Test: blockdev nvme admin passthru ...passed 00:12:14.840 Test: blockdev copy ...passed 00:12:14.840 00:12:14.840 Run Summary: Type Total Ran Passed Failed Inactive 00:12:14.840 suites 1 1 n/a 0 0 00:12:14.840 tests 23 23 23 0 0 00:12:14.840 asserts 152 152 152 0 n/a 00:12:14.840 00:12:14.840 Elapsed time = 0.151 seconds 00:12:14.840 07:21:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.840 07:21:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.840 07:21:37 -- common/autotest_common.sh@10 -- # set +x 00:12:14.840 07:21:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.840 07:21:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:14.840 07:21:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:12:14.840 07:21:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:14.840 07:21:37 -- nvmf/common.sh@116 -- # sync 00:12:15.099 07:21:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:15.099 07:21:37 -- nvmf/common.sh@119 -- # set +e 00:12:15.099 07:21:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:15.099 07:21:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:15.099 rmmod nvme_tcp 00:12:15.099 rmmod nvme_fabrics 00:12:15.099 rmmod nvme_keyring 00:12:15.099 07:21:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:15.099 07:21:37 -- nvmf/common.sh@123 -- # set -e 00:12:15.099 07:21:37 -- nvmf/common.sh@124 -- # return 0 00:12:15.099 07:21:37 -- nvmf/common.sh@477 -- # '[' -n 76546 ']' 00:12:15.099 07:21:37 -- nvmf/common.sh@478 -- # killprocess 76546 00:12:15.099 07:21:37 -- common/autotest_common.sh@936 -- # '[' -z 76546 ']' 00:12:15.099 07:21:37 -- common/autotest_common.sh@940 -- # kill -0 76546 00:12:15.099 07:21:37 -- common/autotest_common.sh@941 -- # uname 00:12:15.099 07:21:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:15.099 07:21:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76546 00:12:15.099 07:21:37 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:12:15.099 07:21:37 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:12:15.099 killing process with pid 76546 00:12:15.099 07:21:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76546' 00:12:15.099 07:21:37 -- common/autotest_common.sh@955 -- # kill 76546 00:12:15.099 07:21:37 -- common/autotest_common.sh@960 -- # wait 76546 00:12:15.359 07:21:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:15.359 07:21:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:15.359 07:21:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:15.359 07:21:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:15.359 07:21:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:15.359 07:21:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.359 07:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.359 07:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.359 07:21:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:15.359 00:12:15.359 real 0m2.922s 00:12:15.359 user 0m9.563s 00:12:15.359 sys 0m0.799s 00:12:15.359 07:21:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:15.359 07:21:37 -- common/autotest_common.sh@10 -- # set +x 00:12:15.359 ************************************ 00:12:15.359 END TEST nvmf_bdevio 00:12:15.359 ************************************ 00:12:15.359 07:21:37 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:12:15.359 07:21:37 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:15.359 07:21:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:15.359 07:21:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:15.359 07:21:37 -- common/autotest_common.sh@10 -- # set +x 00:12:15.359 ************************************ 00:12:15.359 START TEST nvmf_bdevio_no_huge 00:12:15.359 ************************************ 00:12:15.359 07:21:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:15.619 * Looking for test storage... 
00:12:15.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.619 07:21:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:15.619 07:21:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:15.619 07:21:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:15.619 07:21:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:15.619 07:21:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:15.619 07:21:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:15.619 07:21:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:15.619 07:21:37 -- scripts/common.sh@335 -- # IFS=.-: 00:12:15.619 07:21:37 -- scripts/common.sh@335 -- # read -ra ver1 00:12:15.619 07:21:37 -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.619 07:21:37 -- scripts/common.sh@336 -- # read -ra ver2 00:12:15.619 07:21:37 -- scripts/common.sh@337 -- # local 'op=<' 00:12:15.619 07:21:37 -- scripts/common.sh@339 -- # ver1_l=2 00:12:15.619 07:21:37 -- scripts/common.sh@340 -- # ver2_l=1 00:12:15.619 07:21:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:15.619 07:21:37 -- scripts/common.sh@343 -- # case "$op" in 00:12:15.619 07:21:37 -- scripts/common.sh@344 -- # : 1 00:12:15.619 07:21:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:15.619 07:21:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.619 07:21:37 -- scripts/common.sh@364 -- # decimal 1 00:12:15.619 07:21:37 -- scripts/common.sh@352 -- # local d=1 00:12:15.619 07:21:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.619 07:21:37 -- scripts/common.sh@354 -- # echo 1 00:12:15.619 07:21:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:15.619 07:21:37 -- scripts/common.sh@365 -- # decimal 2 00:12:15.619 07:21:37 -- scripts/common.sh@352 -- # local d=2 00:12:15.619 07:21:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.619 07:21:37 -- scripts/common.sh@354 -- # echo 2 00:12:15.619 07:21:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:15.619 07:21:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:15.619 07:21:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:15.619 07:21:37 -- scripts/common.sh@367 -- # return 0 00:12:15.619 07:21:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.619 07:21:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:15.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.619 --rc genhtml_branch_coverage=1 00:12:15.619 --rc genhtml_function_coverage=1 00:12:15.619 --rc genhtml_legend=1 00:12:15.619 --rc geninfo_all_blocks=1 00:12:15.619 --rc geninfo_unexecuted_blocks=1 00:12:15.619 00:12:15.619 ' 00:12:15.619 07:21:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:15.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.619 --rc genhtml_branch_coverage=1 00:12:15.619 --rc genhtml_function_coverage=1 00:12:15.619 --rc genhtml_legend=1 00:12:15.619 --rc geninfo_all_blocks=1 00:12:15.619 --rc geninfo_unexecuted_blocks=1 00:12:15.619 00:12:15.619 ' 00:12:15.619 07:21:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:15.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.619 --rc genhtml_branch_coverage=1 00:12:15.619 --rc genhtml_function_coverage=1 00:12:15.619 --rc genhtml_legend=1 00:12:15.619 --rc geninfo_all_blocks=1 00:12:15.619 --rc geninfo_unexecuted_blocks=1 00:12:15.619 00:12:15.619 ' 00:12:15.619 
07:21:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:15.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.619 --rc genhtml_branch_coverage=1 00:12:15.619 --rc genhtml_function_coverage=1 00:12:15.619 --rc genhtml_legend=1 00:12:15.619 --rc geninfo_all_blocks=1 00:12:15.619 --rc geninfo_unexecuted_blocks=1 00:12:15.619 00:12:15.619 ' 00:12:15.619 07:21:37 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.619 07:21:37 -- nvmf/common.sh@7 -- # uname -s 00:12:15.619 07:21:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.619 07:21:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.619 07:21:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.619 07:21:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.619 07:21:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.619 07:21:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.619 07:21:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.619 07:21:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.619 07:21:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.619 07:21:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.619 07:21:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:12:15.619 07:21:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:12:15.619 07:21:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.619 07:21:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.619 07:21:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.619 07:21:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.619 07:21:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.619 07:21:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.619 07:21:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.620 07:21:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.620 07:21:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.620 07:21:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.620 07:21:37 -- paths/export.sh@5 -- # export PATH 00:12:15.620 07:21:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.620 07:21:37 -- nvmf/common.sh@46 -- # : 0 00:12:15.620 07:21:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:15.620 07:21:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:15.620 07:21:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:15.620 07:21:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.620 07:21:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.620 07:21:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:15.620 07:21:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:15.620 07:21:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:15.620 07:21:37 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.620 07:21:37 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.620 07:21:37 -- target/bdevio.sh@14 -- # nvmftestinit 00:12:15.620 07:21:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:15.620 07:21:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.620 07:21:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:15.620 07:21:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:15.620 07:21:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:15.620 07:21:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.620 07:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:15.620 07:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.620 07:21:37 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:15.620 07:21:37 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:15.620 07:21:37 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:15.620 07:21:37 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:15.620 07:21:37 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:15.620 07:21:37 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:15.620 07:21:37 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.620 07:21:37 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.620 07:21:37 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:15.620 07:21:37 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:15.620 07:21:37 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.620 07:21:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.620 07:21:37 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.620 07:21:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.620 07:21:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.620 07:21:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.620 07:21:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.620 07:21:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.620 07:21:37 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:15.620 07:21:37 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:15.620 Cannot find device "nvmf_tgt_br" 00:12:15.620 07:21:37 -- nvmf/common.sh@154 -- # true 00:12:15.620 07:21:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.620 Cannot find device "nvmf_tgt_br2" 00:12:15.620 07:21:37 -- nvmf/common.sh@155 -- # true 00:12:15.620 07:21:37 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:15.620 07:21:37 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:15.620 Cannot find device "nvmf_tgt_br" 00:12:15.620 07:21:37 -- nvmf/common.sh@157 -- # true 00:12:15.620 07:21:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:15.620 Cannot find device "nvmf_tgt_br2" 00:12:15.620 07:21:37 -- nvmf/common.sh@158 -- # true 00:12:15.620 07:21:37 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:15.620 07:21:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:15.880 07:21:37 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:15.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.880 07:21:37 -- nvmf/common.sh@161 -- # true 00:12:15.880 07:21:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.880 07:21:37 -- nvmf/common.sh@162 -- # true 00:12:15.880 07:21:37 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.880 07:21:37 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.880 07:21:37 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.880 07:21:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.880 07:21:37 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.880 07:21:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.880 07:21:37 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.880 07:21:37 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:15.880 07:21:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:15.880 07:21:37 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:15.880 07:21:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:15.880 07:21:37 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:15.880 07:21:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:15.880 07:21:37 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.880 07:21:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.880 07:21:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:12:15.880 07:21:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:15.880 07:21:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:15.880 07:21:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.880 07:21:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.880 07:21:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.880 07:21:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.880 07:21:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.880 07:21:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:15.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:15.880 00:12:15.880 --- 10.0.0.2 ping statistics --- 00:12:15.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.880 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:15.880 07:21:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:15.880 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:15.880 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:15.880 00:12:15.880 --- 10.0.0.3 ping statistics --- 00:12:15.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.880 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:15.880 07:21:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:15.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:12:15.880 00:12:15.880 --- 10.0.0.1 ping statistics --- 00:12:15.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.880 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:12:15.880 07:21:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.880 07:21:38 -- nvmf/common.sh@421 -- # return 0 00:12:15.880 07:21:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:15.880 07:21:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.880 07:21:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:15.880 07:21:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:15.880 07:21:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.880 07:21:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:15.880 07:21:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:15.880 07:21:38 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:15.880 07:21:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:15.880 07:21:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:15.880 07:21:38 -- common/autotest_common.sh@10 -- # set +x 00:12:15.880 07:21:38 -- nvmf/common.sh@469 -- # nvmfpid=76767 00:12:15.880 07:21:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:15.880 07:21:38 -- nvmf/common.sh@470 -- # waitforlisten 76767 00:12:15.880 07:21:38 -- common/autotest_common.sh@829 -- # '[' -z 76767 ']' 00:12:15.880 07:21:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.880 07:21:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
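Everything from nvmf/common.sh@165 through @206 above is nvmf_veth_init building the test network: a target namespace, three veth pairs, a bridge tying the host-side ends together, and ping checks in both directions. Stripped of the cleanup and retry logic, the traced commands reduce to roughly the following (root required; device and address names exactly as in the log):

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br    # host/initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target port
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done

ip link add nvmf_br type bridge && ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # namespace -> host

With the topology up, the target (nvmf_tgt) is then launched inside nvmf_tgt_ns_spdk, here with --no-huge -s 1024 so it runs without hugepages.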
00:12:15.880 07:21:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.880 07:21:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.880 07:21:38 -- common/autotest_common.sh@10 -- # set +x 00:12:15.880 [2024-11-28 07:21:38.140177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:15.880 [2024-11-28 07:21:38.140276] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:16.139 [2024-11-28 07:21:38.285687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.139 [2024-11-28 07:21:38.383993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:16.139 [2024-11-28 07:21:38.384182] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.139 [2024-11-28 07:21:38.384214] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.139 [2024-11-28 07:21:38.384238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.139 [2024-11-28 07:21:38.384411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:16.139 [2024-11-28 07:21:38.384828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:16.139 [2024-11-28 07:21:38.385036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:16.139 [2024-11-28 07:21:38.385107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.077 07:21:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.077 07:21:39 -- common/autotest_common.sh@862 -- # return 0 00:12:17.077 07:21:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:17.077 07:21:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.077 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 07:21:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.077 07:21:39 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.077 07:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.077 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 [2024-11-28 07:21:39.201787] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.077 07:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.077 07:21:39 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:17.077 07:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.077 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 Malloc0 00:12:17.077 07:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.077 07:21:39 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:17.077 07:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.077 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 07:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.077 07:21:39 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:17.077 07:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.077 
07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 07:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.077 07:21:39 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.077 07:21:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.077 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 [2024-11-28 07:21:39.243035] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:17.077 07:21:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.077 07:21:39 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:17.077 07:21:39 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:17.077 07:21:39 -- nvmf/common.sh@520 -- # config=() 00:12:17.077 07:21:39 -- nvmf/common.sh@520 -- # local subsystem config 00:12:17.077 07:21:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:12:17.077 07:21:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:12:17.077 { 00:12:17.077 "params": { 00:12:17.077 "name": "Nvme$subsystem", 00:12:17.077 "trtype": "$TEST_TRANSPORT", 00:12:17.077 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:17.077 "adrfam": "ipv4", 00:12:17.077 "trsvcid": "$NVMF_PORT", 00:12:17.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:17.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:17.077 "hdgst": ${hdgst:-false}, 00:12:17.077 "ddgst": ${ddgst:-false} 00:12:17.077 }, 00:12:17.077 "method": "bdev_nvme_attach_controller" 00:12:17.077 } 00:12:17.077 EOF 00:12:17.077 )") 00:12:17.077 07:21:39 -- nvmf/common.sh@542 -- # cat 00:12:17.077 07:21:39 -- nvmf/common.sh@544 -- # jq . 00:12:17.077 07:21:39 -- nvmf/common.sh@545 -- # IFS=, 00:12:17.077 07:21:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:12:17.077 "params": { 00:12:17.077 "name": "Nvme1", 00:12:17.077 "trtype": "tcp", 00:12:17.077 "traddr": "10.0.0.2", 00:12:17.077 "adrfam": "ipv4", 00:12:17.077 "trsvcid": "4420", 00:12:17.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:17.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:17.077 "hdgst": false, 00:12:17.077 "ddgst": false 00:12:17.077 }, 00:12:17.077 "method": "bdev_nvme_attach_controller" 00:12:17.077 }' 00:12:17.077 [2024-11-28 07:21:39.318110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:17.077 [2024-11-28 07:21:39.318210] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76808 ] 00:12:17.337 [2024-11-28 07:21:39.463111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:17.337 [2024-11-28 07:21:39.587028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.337 [2024-11-28 07:21:39.587148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.337 [2024-11-28 07:21:39.587156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.597 [2024-11-28 07:21:39.757509] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
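gen_nvmf_target_json (the printf fragment above) emits the JSON config that bdevio reads via --json /dev/fd/62, so the test attaches Nvme1 over TCP to the listener just created. A standalone sketch with the same parameters; the outer "subsystems"/"bdev" wrapper is assumed from the printed params block, and the temporary file path is only illustrative (the test streams the config through a process-substitution fd instead):

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024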
00:12:17.597 [2024-11-28 07:21:39.757555] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:17.597 I/O targets: 00:12:17.597 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:17.597 00:12:17.597 00:12:17.597 CUnit - A unit testing framework for C - Version 2.1-3 00:12:17.597 http://cunit.sourceforge.net/ 00:12:17.597 00:12:17.597 00:12:17.597 Suite: bdevio tests on: Nvme1n1 00:12:17.597 Test: blockdev write read block ...passed 00:12:17.597 Test: blockdev write zeroes read block ...passed 00:12:17.597 Test: blockdev write zeroes read no split ...passed 00:12:17.597 Test: blockdev write zeroes read split ...passed 00:12:17.597 Test: blockdev write zeroes read split partial ...passed 00:12:17.597 Test: blockdev reset ...[2024-11-28 07:21:39.798435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:17.597 [2024-11-28 07:21:39.798531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163e260 (9): Bad file descriptor 00:12:17.597 [2024-11-28 07:21:39.815482] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:17.597 passed 00:12:17.597 Test: blockdev write read 8 blocks ...passed 00:12:17.597 Test: blockdev write read size > 128k ...passed 00:12:17.597 Test: blockdev write read invalid size ...passed 00:12:17.597 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:17.598 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:17.598 Test: blockdev write read max offset ...passed 00:12:17.598 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:17.598 Test: blockdev writev readv 8 blocks ...passed 00:12:17.598 Test: blockdev writev readv 30 x 1block ...passed 00:12:17.598 Test: blockdev writev readv block ...passed 00:12:17.598 Test: blockdev writev readv size > 128k ...passed 00:12:17.598 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:17.598 Test: blockdev comparev and writev ...[2024-11-28 07:21:39.823781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.598 [2024-11-28 07:21:39.823834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.823860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.598 [2024-11-28 07:21:39.823874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.824175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.598 [2024-11-28 07:21:39.824204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.824225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.598 [2024-11-28 07:21:39.824238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.824530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.598 [2024-11-28 07:21:39.824551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.824573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.598 [2024-11-28 07:21:39.824586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.824877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.598 [2024-11-28 07:21:39.824902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.824923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:17.598 [2024-11-28 07:21:39.824936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:17.598 passed 00:12:17.598 Test: blockdev nvme passthru rw ...passed 00:12:17.598 Test: blockdev nvme passthru vendor specific ...[2024-11-28 07:21:39.825741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.598 [2024-11-28 07:21:39.825769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.825892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.598 [2024-11-28 07:21:39.825910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.826025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.598 [2024-11-28 07:21:39.826042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:17.598 [2024-11-28 07:21:39.826165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:17.598 [2024-11-28 07:21:39.826182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:17.598 passed 00:12:17.598 Test: blockdev nvme admin passthru ...passed 00:12:17.598 Test: blockdev copy ...passed 00:12:17.598 00:12:17.598 Run Summary: Type Total Ran Passed Failed Inactive 00:12:17.598 suites 1 1 n/a 0 0 00:12:17.598 tests 23 23 23 0 0 00:12:17.598 asserts 152 152 152 0 n/a 00:12:17.598 00:12:17.598 Elapsed time = 0.165 seconds 00:12:18.165 07:21:40 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:18.165 07:21:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.165 07:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:18.165 07:21:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.165 07:21:40 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:18.165 07:21:40 -- target/bdevio.sh@30 -- # nvmftestfini 00:12:18.165 07:21:40 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:12:18.165 07:21:40 -- nvmf/common.sh@116 -- # sync 00:12:18.165 07:21:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:18.165 07:21:40 -- nvmf/common.sh@119 -- # set +e 00:12:18.165 07:21:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:18.165 07:21:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:18.165 rmmod nvme_tcp 00:12:18.165 rmmod nvme_fabrics 00:12:18.165 rmmod nvme_keyring 00:12:18.166 07:21:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:18.166 07:21:40 -- nvmf/common.sh@123 -- # set -e 00:12:18.166 07:21:40 -- nvmf/common.sh@124 -- # return 0 00:12:18.166 07:21:40 -- nvmf/common.sh@477 -- # '[' -n 76767 ']' 00:12:18.166 07:21:40 -- nvmf/common.sh@478 -- # killprocess 76767 00:12:18.166 07:21:40 -- common/autotest_common.sh@936 -- # '[' -z 76767 ']' 00:12:18.166 07:21:40 -- common/autotest_common.sh@940 -- # kill -0 76767 00:12:18.166 07:21:40 -- common/autotest_common.sh@941 -- # uname 00:12:18.166 07:21:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:18.166 07:21:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76767 00:12:18.166 07:21:40 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:12:18.166 07:21:40 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:12:18.166 killing process with pid 76767 00:12:18.166 07:21:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76767' 00:12:18.166 07:21:40 -- common/autotest_common.sh@955 -- # kill 76767 00:12:18.166 07:21:40 -- common/autotest_common.sh@960 -- # wait 76767 00:12:18.733 07:21:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:18.733 07:21:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:18.733 07:21:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:18.733 07:21:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.733 07:21:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:18.734 07:21:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.734 07:21:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.734 07:21:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.734 07:21:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:18.734 00:12:18.734 real 0m3.182s 00:12:18.734 user 0m10.507s 00:12:18.734 sys 0m1.304s 00:12:18.734 07:21:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:18.734 07:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:18.734 ************************************ 00:12:18.734 END TEST nvmf_bdevio_no_huge 00:12:18.734 ************************************ 00:12:18.734 07:21:40 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:18.734 07:21:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:18.734 07:21:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:18.734 07:21:40 -- common/autotest_common.sh@10 -- # set +x 00:12:18.734 ************************************ 00:12:18.734 START TEST nvmf_tls 00:12:18.734 ************************************ 00:12:18.734 07:21:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:18.734 * Looking for test storage... 
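Just before the nvmf_tls banner, nvmftestfini tears the previous test's stack down: sync, unload the kernel initiator modules, stop the target, then drop the test namespace and flush the initiator address. A condensed sketch of the teardown traced above (the retry loop and the iso/physical-NIC branches are omitted, and treating _remove_spdk_ns as a plain namespace delete is an assumption):

nvmftestfini() {
  sync
  modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring, as in the rmmod lines above
  modprobe -v -r nvme-fabrics
  if [ -n "$nvmfpid" ]; then     # 76767 in this run
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid" && wait "$nvmfpid"
  fi
  ip netns delete nvmf_tgt_ns_spdk 2> /dev/null || true   # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush nvmf_init_if
}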
00:12:18.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:18.734 07:21:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:18.734 07:21:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:18.734 07:21:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:18.993 07:21:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:18.993 07:21:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:18.993 07:21:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:18.993 07:21:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:18.993 07:21:41 -- scripts/common.sh@335 -- # IFS=.-: 00:12:18.993 07:21:41 -- scripts/common.sh@335 -- # read -ra ver1 00:12:18.993 07:21:41 -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.993 07:21:41 -- scripts/common.sh@336 -- # read -ra ver2 00:12:18.993 07:21:41 -- scripts/common.sh@337 -- # local 'op=<' 00:12:18.993 07:21:41 -- scripts/common.sh@339 -- # ver1_l=2 00:12:18.993 07:21:41 -- scripts/common.sh@340 -- # ver2_l=1 00:12:18.993 07:21:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:18.993 07:21:41 -- scripts/common.sh@343 -- # case "$op" in 00:12:18.993 07:21:41 -- scripts/common.sh@344 -- # : 1 00:12:18.993 07:21:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:18.993 07:21:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.993 07:21:41 -- scripts/common.sh@364 -- # decimal 1 00:12:18.993 07:21:41 -- scripts/common.sh@352 -- # local d=1 00:12:18.993 07:21:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.993 07:21:41 -- scripts/common.sh@354 -- # echo 1 00:12:18.993 07:21:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:18.993 07:21:41 -- scripts/common.sh@365 -- # decimal 2 00:12:18.993 07:21:41 -- scripts/common.sh@352 -- # local d=2 00:12:18.993 07:21:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.993 07:21:41 -- scripts/common.sh@354 -- # echo 2 00:12:18.993 07:21:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:18.993 07:21:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:18.993 07:21:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:18.993 07:21:41 -- scripts/common.sh@367 -- # return 0 00:12:18.993 07:21:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.993 07:21:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.993 --rc genhtml_branch_coverage=1 00:12:18.993 --rc genhtml_function_coverage=1 00:12:18.993 --rc genhtml_legend=1 00:12:18.993 --rc geninfo_all_blocks=1 00:12:18.993 --rc geninfo_unexecuted_blocks=1 00:12:18.993 00:12:18.993 ' 00:12:18.993 07:21:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.993 --rc genhtml_branch_coverage=1 00:12:18.993 --rc genhtml_function_coverage=1 00:12:18.993 --rc genhtml_legend=1 00:12:18.993 --rc geninfo_all_blocks=1 00:12:18.993 --rc geninfo_unexecuted_blocks=1 00:12:18.993 00:12:18.993 ' 00:12:18.993 07:21:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:18.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.993 --rc genhtml_branch_coverage=1 00:12:18.993 --rc genhtml_function_coverage=1 00:12:18.994 --rc genhtml_legend=1 00:12:18.994 --rc geninfo_all_blocks=1 00:12:18.994 --rc geninfo_unexecuted_blocks=1 00:12:18.994 00:12:18.994 ' 00:12:18.994 
07:21:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:18.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.994 --rc genhtml_branch_coverage=1 00:12:18.994 --rc genhtml_function_coverage=1 00:12:18.994 --rc genhtml_legend=1 00:12:18.994 --rc geninfo_all_blocks=1 00:12:18.994 --rc geninfo_unexecuted_blocks=1 00:12:18.994 00:12:18.994 ' 00:12:18.994 07:21:41 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:18.994 07:21:41 -- nvmf/common.sh@7 -- # uname -s 00:12:18.994 07:21:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.994 07:21:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.994 07:21:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.994 07:21:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.994 07:21:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.994 07:21:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.994 07:21:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.994 07:21:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.994 07:21:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.994 07:21:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.994 07:21:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:12:18.994 07:21:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:12:18.994 07:21:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.994 07:21:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.994 07:21:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:18.994 07:21:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:18.994 07:21:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.994 07:21:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.994 07:21:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.994 07:21:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.994 07:21:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.994 07:21:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.994 07:21:41 -- paths/export.sh@5 -- # export PATH 00:12:18.994 07:21:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.994 07:21:41 -- nvmf/common.sh@46 -- # : 0 00:12:18.994 07:21:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:18.994 07:21:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:18.994 07:21:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:18.994 07:21:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.994 07:21:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.994 07:21:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:18.994 07:21:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:18.994 07:21:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:18.994 07:21:41 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:18.994 07:21:41 -- target/tls.sh@71 -- # nvmftestinit 00:12:18.994 07:21:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:18.994 07:21:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.994 07:21:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:18.994 07:21:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:18.994 07:21:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:18.994 07:21:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.994 07:21:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.994 07:21:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.994 07:21:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:18.994 07:21:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:18.994 07:21:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:18.994 07:21:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:18.994 07:21:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:18.994 07:21:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:18.994 07:21:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.994 07:21:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.994 07:21:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:18.994 07:21:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:18.994 07:21:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:18.994 07:21:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:18.994 07:21:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:18.994 
07:21:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.994 07:21:41 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:18.994 07:21:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:18.994 07:21:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:18.994 07:21:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:18.994 07:21:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:18.994 07:21:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:18.994 Cannot find device "nvmf_tgt_br" 00:12:18.994 07:21:41 -- nvmf/common.sh@154 -- # true 00:12:18.994 07:21:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:18.994 Cannot find device "nvmf_tgt_br2" 00:12:18.994 07:21:41 -- nvmf/common.sh@155 -- # true 00:12:18.994 07:21:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:18.994 07:21:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:18.994 Cannot find device "nvmf_tgt_br" 00:12:18.994 07:21:41 -- nvmf/common.sh@157 -- # true 00:12:18.994 07:21:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:18.994 Cannot find device "nvmf_tgt_br2" 00:12:18.994 07:21:41 -- nvmf/common.sh@158 -- # true 00:12:18.994 07:21:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:18.994 07:21:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:18.994 07:21:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:18.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.994 07:21:41 -- nvmf/common.sh@161 -- # true 00:12:18.994 07:21:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:18.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:18.994 07:21:41 -- nvmf/common.sh@162 -- # true 00:12:18.994 07:21:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:18.994 07:21:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:18.994 07:21:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:18.994 07:21:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:18.994 07:21:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:18.994 07:21:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:19.254 07:21:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:19.254 07:21:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:19.254 07:21:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:19.254 07:21:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:19.254 07:21:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:19.254 07:21:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:19.254 07:21:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:19.254 07:21:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:19.254 07:21:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:19.254 07:21:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:19.254 07:21:41 -- 
nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:19.254 07:21:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:19.254 07:21:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:19.254 07:21:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.254 07:21:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.254 07:21:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.254 07:21:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.254 07:21:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:19.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:19.254 00:12:19.254 --- 10.0.0.2 ping statistics --- 00:12:19.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.254 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:19.254 07:21:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:19.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:12:19.254 00:12:19.254 --- 10.0.0.3 ping statistics --- 00:12:19.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.254 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:19.254 07:21:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:19.254 00:12:19.254 --- 10.0.0.1 ping statistics --- 00:12:19.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.254 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:19.254 07:21:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.254 07:21:41 -- nvmf/common.sh@421 -- # return 0 00:12:19.254 07:21:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:19.254 07:21:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.254 07:21:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:19.254 07:21:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:19.254 07:21:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.254 07:21:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:19.254 07:21:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:19.254 07:21:41 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:19.254 07:21:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:19.254 07:21:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:19.254 07:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:19.254 07:21:41 -- nvmf/common.sh@469 -- # nvmfpid=76994 00:12:19.254 07:21:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:19.254 07:21:41 -- nvmf/common.sh@470 -- # waitforlisten 76994 00:12:19.254 07:21:41 -- common/autotest_common.sh@829 -- # '[' -z 76994 ']' 00:12:19.254 07:21:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.254 07:21:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
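For the TLS test the target is started with --wait-for-rpc, so the ssl socket implementation can be selected and tuned before subsystem initialization completes. The sock_* RPCs that follow in the trace amount to the sequence below (paths as in the log; the test later also flips the TLS version to 7 and toggles kTLS the same way):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc sock_set_default_impl -i ssl                         # use the TLS-capable ssl sock layer
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc sock_impl_get_options -i ssl | jq -r .tls_version    # sanity check: expect 13
$rpc framework_start_init                                 # finish startup once the sock layer is configured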
00:12:19.254 07:21:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.254 07:21:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.254 07:21:41 -- common/autotest_common.sh@10 -- # set +x 00:12:19.254 [2024-11-28 07:21:41.474781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:19.254 [2024-11-28 07:21:41.474866] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.513 [2024-11-28 07:21:41.615990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.513 [2024-11-28 07:21:41.698074] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:19.513 [2024-11-28 07:21:41.698243] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.513 [2024-11-28 07:21:41.698258] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.513 [2024-11-28 07:21:41.698269] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.513 [2024-11-28 07:21:41.698327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.450 07:21:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:20.450 07:21:42 -- common/autotest_common.sh@862 -- # return 0 00:12:20.450 07:21:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:20.450 07:21:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:20.450 07:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:20.450 07:21:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.450 07:21:42 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:12:20.450 07:21:42 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:20.450 true 00:12:20.709 07:21:42 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:20.709 07:21:42 -- target/tls.sh@82 -- # jq -r .tls_version 00:12:20.968 07:21:43 -- target/tls.sh@82 -- # version=0 00:12:20.968 07:21:43 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:12:20.968 07:21:43 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:21.228 07:21:43 -- target/tls.sh@90 -- # jq -r .tls_version 00:12:21.228 07:21:43 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:21.228 07:21:43 -- target/tls.sh@90 -- # version=13 00:12:21.228 07:21:43 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:12:21.228 07:21:43 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:21.486 07:21:43 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:21.486 07:21:43 -- target/tls.sh@98 -- # jq -r .tls_version 00:12:21.744 07:21:43 -- target/tls.sh@98 -- # version=7 00:12:21.744 07:21:43 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:12:21.744 07:21:43 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:21.744 07:21:43 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:22.003 07:21:44 -- 
target/tls.sh@105 -- # ktls=false 00:12:22.003 07:21:44 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:12:22.003 07:21:44 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:22.261 07:21:44 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:22.261 07:21:44 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:22.520 07:21:44 -- target/tls.sh@113 -- # ktls=true 00:12:22.520 07:21:44 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:12:22.520 07:21:44 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:22.779 07:21:44 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:22.779 07:21:44 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:12:23.038 07:21:45 -- target/tls.sh@121 -- # ktls=false 00:12:23.038 07:21:45 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:12:23.038 07:21:45 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:12:23.038 07:21:45 -- target/tls.sh@49 -- # local key hash crc 00:12:23.038 07:21:45 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:12:23.038 07:21:45 -- target/tls.sh@51 -- # hash=01 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # gzip -1 -c 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # tail -c8 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # head -c 4 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # crc='p$H�' 00:12:23.038 07:21:45 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:23.038 07:21:45 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:12:23.038 07:21:45 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:23.038 07:21:45 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:23.038 07:21:45 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:12:23.038 07:21:45 -- target/tls.sh@49 -- # local key hash crc 00:12:23.038 07:21:45 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:12:23.038 07:21:45 -- target/tls.sh@51 -- # hash=01 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # gzip -1 -c 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # tail -c8 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # head -c 4 00:12:23.038 07:21:45 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:12:23.038 07:21:45 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:23.038 07:21:45 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:12:23.038 07:21:45 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:23.038 07:21:45 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:23.038 07:21:45 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:23.038 07:21:45 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:23.038 07:21:45 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:23.038 07:21:45 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 
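format_interchange_psk above turns a raw hex key into the NVMe TLS PSK interchange form: a CRC32 of the key, pulled out of a gzip trailer (whose last eight bytes are the CRC32 followed by the input size), is appended to the key and the result is base64-encoded under the NVMeTLSkey-1:<hash>: prefix. A self-contained sketch of the traced pipeline:

format_interchange_psk() {
  local key=$1 hash=$2   # hash is "01" in this run
  local crc
  # gzip -1 -c ends with CRC32 (little-endian) and then the input size, 4 bytes each
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
  # note: mirrors the test helper; a CRC byte of 0x0a would be mangled by the command substitution
  echo "NVMeTLSkey-1:${hash}:$(echo -n "${key}${crc}" | base64):"
}

format_interchange_psk 00112233445566778899aabbccddeeff 01
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The same helper produces the second key from ffeeddccbbaa99887766554433221100; both are written to key1.txt and key2.txt and chmod 0600 before being handed to target and initiator.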
00:12:23.038 07:21:45 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:23.038 07:21:45 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:23.038 07:21:45 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:23.294 07:21:45 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:23.860 07:21:45 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:23.860 07:21:45 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:23.860 07:21:45 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:23.860 [2024-11-28 07:21:46.114203] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.860 07:21:46 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:24.118 07:21:46 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:24.377 [2024-11-28 07:21:46.598311] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:24.377 [2024-11-28 07:21:46.598558] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.377 07:21:46 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:24.636 malloc0 00:12:24.636 07:21:46 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:24.896 07:21:47 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:25.160 07:21:47 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:37.373 Initializing NVMe Controllers 00:12:37.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:37.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:37.373 Initialization complete. Launching workers. 
00:12:37.373 ======================================================== 00:12:37.373 Latency(us) 00:12:37.373 Device Information : IOPS MiB/s Average min max 00:12:37.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9951.03 38.87 6433.02 1396.71 9107.48 00:12:37.373 ======================================================== 00:12:37.373 Total : 9951.03 38.87 6433.02 1396.71 9107.48 00:12:37.373 00:12:37.373 07:21:57 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:37.373 07:21:57 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:37.373 07:21:57 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:37.373 07:21:57 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:37.373 07:21:57 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:37.373 07:21:57 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:37.373 07:21:57 -- target/tls.sh@28 -- # bdevperf_pid=77242 00:12:37.373 07:21:57 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:37.373 07:21:57 -- target/tls.sh@31 -- # waitforlisten 77242 /var/tmp/bdevperf.sock 00:12:37.373 07:21:57 -- common/autotest_common.sh@829 -- # '[' -z 77242 ']' 00:12:37.373 07:21:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:37.373 07:21:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.373 07:21:57 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:37.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:37.373 07:21:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:37.373 07:21:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.373 07:21:57 -- common/autotest_common.sh@10 -- # set +x 00:12:37.373 [2024-11-28 07:21:57.607952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:37.373 [2024-11-28 07:21:57.608100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77242 ] 00:12:37.373 [2024-11-28 07:21:57.754838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.373 [2024-11-28 07:21:57.854381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.373 07:21:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.373 07:21:58 -- common/autotest_common.sh@862 -- # return 0 00:12:37.373 07:21:58 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:37.373 [2024-11-28 07:21:58.830643] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:37.373 TLSTESTn1 00:12:37.373 07:21:58 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:37.373 Running I/O for 10 seconds... 
00:12:47.383 00:12:47.383 Latency(us) 00:12:47.383 [2024-11-28T07:22:09.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.383 [2024-11-28T07:22:09.658Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:47.383 Verification LBA range: start 0x0 length 0x2000 00:12:47.383 TLSTESTn1 : 10.02 5439.36 21.25 0.00 0.00 23490.42 4944.99 29074.15 00:12:47.383 [2024-11-28T07:22:09.658Z] =================================================================================================================== 00:12:47.383 [2024-11-28T07:22:09.658Z] Total : 5439.36 21.25 0.00 0.00 23490.42 4944.99 29074.15 00:12:47.383 0 00:12:47.383 07:22:09 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:47.383 07:22:09 -- target/tls.sh@45 -- # killprocess 77242 00:12:47.383 07:22:09 -- common/autotest_common.sh@936 -- # '[' -z 77242 ']' 00:12:47.383 07:22:09 -- common/autotest_common.sh@940 -- # kill -0 77242 00:12:47.383 07:22:09 -- common/autotest_common.sh@941 -- # uname 00:12:47.383 07:22:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:47.383 07:22:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77242 00:12:47.383 07:22:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:47.383 07:22:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:47.383 killing process with pid 77242 00:12:47.383 07:22:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77242' 00:12:47.383 Received shutdown signal, test time was about 10.000000 seconds 00:12:47.383 00:12:47.383 Latency(us) 00:12:47.383 [2024-11-28T07:22:09.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.383 [2024-11-28T07:22:09.658Z] =================================================================================================================== 00:12:47.383 [2024-11-28T07:22:09.658Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:47.383 07:22:09 -- common/autotest_common.sh@955 -- # kill 77242 00:12:47.383 07:22:09 -- common/autotest_common.sh@960 -- # wait 77242 00:12:47.383 07:22:09 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:47.383 07:22:09 -- common/autotest_common.sh@650 -- # local es=0 00:12:47.383 07:22:09 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:47.383 07:22:09 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:47.383 07:22:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.383 07:22:09 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:47.383 07:22:09 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:47.383 07:22:09 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:47.383 07:22:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:47.383 07:22:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:47.383 07:22:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:47.383 07:22:09 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:12:47.383 07:22:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:47.383 
07:22:09 -- target/tls.sh@28 -- # bdevperf_pid=77376 00:12:47.383 07:22:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:47.383 07:22:09 -- target/tls.sh@31 -- # waitforlisten 77376 /var/tmp/bdevperf.sock 00:12:47.383 07:22:09 -- common/autotest_common.sh@829 -- # '[' -z 77376 ']' 00:12:47.383 07:22:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:47.383 07:22:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:47.383 07:22:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:47.383 07:22:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.383 07:22:09 -- common/autotest_common.sh@10 -- # set +x 00:12:47.383 07:22:09 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:47.383 [2024-11-28 07:22:09.386328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:47.383 [2024-11-28 07:22:09.386430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77376 ] 00:12:47.383 [2024-11-28 07:22:09.525725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.383 [2024-11-28 07:22:09.613604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.352 07:22:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:48.352 07:22:10 -- common/autotest_common.sh@862 -- # return 0 00:12:48.352 07:22:10 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:48.612 [2024-11-28 07:22:10.663168] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:48.612 [2024-11-28 07:22:10.668016] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:48.612 [2024-11-28 07:22:10.668604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa2f90 (107): Transport endpoint is not connected 00:12:48.612 [2024-11-28 07:22:10.669592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa2f90 (9): Bad file descriptor 00:12:48.612 [2024-11-28 07:22:10.670587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:48.612 [2024-11-28 07:22:10.670609] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:48.612 [2024-11-28 07:22:10.670619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:48.612 request: 00:12:48.612 { 00:12:48.612 "name": "TLSTEST", 00:12:48.612 "trtype": "tcp", 00:12:48.612 "traddr": "10.0.0.2", 00:12:48.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:48.612 "adrfam": "ipv4", 00:12:48.612 "trsvcid": "4420", 00:12:48.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:48.612 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:12:48.612 "method": "bdev_nvme_attach_controller", 00:12:48.612 "req_id": 1 00:12:48.612 } 00:12:48.612 Got JSON-RPC error response 00:12:48.612 response: 00:12:48.612 { 00:12:48.612 "code": -32602, 00:12:48.612 "message": "Invalid parameters" 00:12:48.612 } 00:12:48.612 07:22:10 -- target/tls.sh@36 -- # killprocess 77376 00:12:48.612 07:22:10 -- common/autotest_common.sh@936 -- # '[' -z 77376 ']' 00:12:48.612 07:22:10 -- common/autotest_common.sh@940 -- # kill -0 77376 00:12:48.612 07:22:10 -- common/autotest_common.sh@941 -- # uname 00:12:48.612 07:22:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.612 07:22:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77376 00:12:48.612 killing process with pid 77376 00:12:48.612 Received shutdown signal, test time was about 10.000000 seconds 00:12:48.612 00:12:48.612 Latency(us) 00:12:48.612 [2024-11-28T07:22:10.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.612 [2024-11-28T07:22:10.887Z] =================================================================================================================== 00:12:48.612 [2024-11-28T07:22:10.887Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:48.612 07:22:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:48.612 07:22:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:48.612 07:22:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77376' 00:12:48.612 07:22:10 -- common/autotest_common.sh@955 -- # kill 77376 00:12:48.612 07:22:10 -- common/autotest_common.sh@960 -- # wait 77376 00:12:48.872 07:22:10 -- target/tls.sh@37 -- # return 1 00:12:48.872 07:22:10 -- common/autotest_common.sh@653 -- # es=1 00:12:48.872 07:22:10 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.872 07:22:10 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.872 07:22:10 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.872 07:22:10 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:48.872 07:22:10 -- common/autotest_common.sh@650 -- # local es=0 00:12:48.872 07:22:10 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:48.872 07:22:10 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:48.872 07:22:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.872 07:22:10 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:48.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
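The "Invalid parameters" response above is the expected result: the target was provisioned only with key1.txt for host1, so an initiator presenting key2.txt (and, in the cases that follow, the wrong hostnqn, the wrong subnqn, or no PSK at all) cannot establish the TLS association and bdev_nvme_attach_controller fails. Outside the harness, the same check can be reproduced by asserting that the attach RPC exits non-zero; the surrounding if/echo below is a hedged sketch and not part of tls.sh.

    # Expect the attach to FAIL when the offered PSK does not match what the
    # target was provisioned with; '!' inverts the exit status of rpc.py.
    if ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
          --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt; then
        echo "attach with a mismatched PSK failed, as expected"
    fi

The wrong-hostnqn and wrong-subnqn cases below fail one step earlier, with the target logging "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>", i.e. the PSK lookup appears to be keyed on the host/subsystem pair registered via nvmf_subsystem_add_host.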
00:12:48.872 07:22:10 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.872 07:22:10 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:48.872 07:22:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:48.872 07:22:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:48.872 07:22:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:48.872 07:22:10 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:48.872 07:22:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:48.872 07:22:10 -- target/tls.sh@28 -- # bdevperf_pid=77403 00:12:48.872 07:22:10 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:48.872 07:22:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:48.872 07:22:10 -- target/tls.sh@31 -- # waitforlisten 77403 /var/tmp/bdevperf.sock 00:12:48.872 07:22:10 -- common/autotest_common.sh@829 -- # '[' -z 77403 ']' 00:12:48.872 07:22:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:48.872 07:22:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.872 07:22:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:48.872 07:22:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.872 07:22:10 -- common/autotest_common.sh@10 -- # set +x 00:12:48.872 [2024-11-28 07:22:10.964766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:48.872 [2024-11-28 07:22:10.965128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77403 ] 00:12:48.872 [2024-11-28 07:22:11.100946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.131 [2024-11-28 07:22:11.188483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.699 07:22:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:49.699 07:22:11 -- common/autotest_common.sh@862 -- # return 0 00:12:49.699 07:22:11 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:49.958 [2024-11-28 07:22:12.166078] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:49.958 [2024-11-28 07:22:12.177444] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:49.958 [2024-11-28 07:22:12.177488] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:49.958 [2024-11-28 07:22:12.177540] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:49.958 [2024-11-28 07:22:12.178499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202cf90 (107): Transport endpoint is not connected 00:12:49.958 [2024-11-28 07:22:12.179489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202cf90 (9): Bad file descriptor 00:12:49.958 [2024-11-28 07:22:12.180484] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:49.958 [2024-11-28 07:22:12.180509] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:49.958 [2024-11-28 07:22:12.180520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:49.958 request: 00:12:49.958 { 00:12:49.958 "name": "TLSTEST", 00:12:49.958 "trtype": "tcp", 00:12:49.958 "traddr": "10.0.0.2", 00:12:49.958 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:49.958 "adrfam": "ipv4", 00:12:49.958 "trsvcid": "4420", 00:12:49.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:49.958 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:49.958 "method": "bdev_nvme_attach_controller", 00:12:49.958 "req_id": 1 00:12:49.958 } 00:12:49.958 Got JSON-RPC error response 00:12:49.958 response: 00:12:49.958 { 00:12:49.958 "code": -32602, 00:12:49.958 "message": "Invalid parameters" 00:12:49.958 } 00:12:49.958 07:22:12 -- target/tls.sh@36 -- # killprocess 77403 00:12:49.958 07:22:12 -- common/autotest_common.sh@936 -- # '[' -z 77403 ']' 00:12:49.958 07:22:12 -- common/autotest_common.sh@940 -- # kill -0 77403 00:12:49.958 07:22:12 -- common/autotest_common.sh@941 -- # uname 00:12:49.958 07:22:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:49.958 07:22:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77403 00:12:50.217 killing process with pid 77403 00:12:50.217 Received shutdown signal, test time was about 10.000000 seconds 00:12:50.217 00:12:50.217 Latency(us) 00:12:50.217 [2024-11-28T07:22:12.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.217 [2024-11-28T07:22:12.492Z] =================================================================================================================== 00:12:50.217 [2024-11-28T07:22:12.492Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:50.217 07:22:12 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:50.217 07:22:12 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:50.217 07:22:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77403' 00:12:50.217 07:22:12 -- common/autotest_common.sh@955 -- # kill 77403 00:12:50.217 07:22:12 -- common/autotest_common.sh@960 -- # wait 77403 00:12:50.217 07:22:12 -- target/tls.sh@37 -- # return 1 00:12:50.217 07:22:12 -- common/autotest_common.sh@653 -- # es=1 00:12:50.217 07:22:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.217 07:22:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:50.217 07:22:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.217 07:22:12 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:50.217 07:22:12 -- common/autotest_common.sh@650 -- # local es=0 00:12:50.218 07:22:12 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:50.218 07:22:12 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:50.218 07:22:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.218 07:22:12 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:50.218 07:22:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.218 07:22:12 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:50.218 07:22:12 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:50.218 07:22:12 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:50.218 07:22:12 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:12:50.218 07:22:12 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:50.218 07:22:12 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:50.218 07:22:12 -- target/tls.sh@28 -- # bdevperf_pid=77431 00:12:50.218 07:22:12 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:50.218 07:22:12 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:50.218 07:22:12 -- target/tls.sh@31 -- # waitforlisten 77431 /var/tmp/bdevperf.sock 00:12:50.218 07:22:12 -- common/autotest_common.sh@829 -- # '[' -z 77431 ']' 00:12:50.218 07:22:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:50.218 07:22:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:50.218 07:22:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:50.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:50.218 07:22:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:50.218 07:22:12 -- common/autotest_common.sh@10 -- # set +x 00:12:50.477 [2024-11-28 07:22:12.493467] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:50.477 [2024-11-28 07:22:12.493575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77431 ] 00:12:50.477 [2024-11-28 07:22:12.633666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.477 [2024-11-28 07:22:12.723821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.416 07:22:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.417 07:22:13 -- common/autotest_common.sh@862 -- # return 0 00:12:51.417 07:22:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:51.677 [2024-11-28 07:22:13.753648] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:51.677 [2024-11-28 07:22:13.758453] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:51.677 [2024-11-28 07:22:13.758498] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:51.677 [2024-11-28 07:22:13.758550] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:51.677 [2024-11-28 07:22:13.759148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a95f90 (107): Transport endpoint is not connected 00:12:51.677 [2024-11-28 07:22:13.760136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a95f90 (9): Bad file descriptor 00:12:51.677 [2024-11-28 07:22:13.761132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:51.677 [2024-11-28 07:22:13.761153] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:51.677 [2024-11-28 07:22:13.761163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:12:51.677 request: 00:12:51.677 { 00:12:51.677 "name": "TLSTEST", 00:12:51.677 "trtype": "tcp", 00:12:51.677 "traddr": "10.0.0.2", 00:12:51.677 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:51.677 "adrfam": "ipv4", 00:12:51.677 "trsvcid": "4420", 00:12:51.677 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:51.677 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:51.677 "method": "bdev_nvme_attach_controller", 00:12:51.677 "req_id": 1 00:12:51.677 } 00:12:51.677 Got JSON-RPC error response 00:12:51.677 response: 00:12:51.677 { 00:12:51.677 "code": -32602, 00:12:51.677 "message": "Invalid parameters" 00:12:51.677 } 00:12:51.677 07:22:13 -- target/tls.sh@36 -- # killprocess 77431 00:12:51.677 07:22:13 -- common/autotest_common.sh@936 -- # '[' -z 77431 ']' 00:12:51.677 07:22:13 -- common/autotest_common.sh@940 -- # kill -0 77431 00:12:51.677 07:22:13 -- common/autotest_common.sh@941 -- # uname 00:12:51.677 07:22:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:51.677 07:22:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77431 00:12:51.677 killing process with pid 77431 00:12:51.677 Received shutdown signal, test time was about 10.000000 seconds 00:12:51.677 00:12:51.677 Latency(us) 00:12:51.677 [2024-11-28T07:22:13.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.677 [2024-11-28T07:22:13.952Z] =================================================================================================================== 00:12:51.677 [2024-11-28T07:22:13.952Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:51.677 07:22:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:51.677 07:22:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:51.677 07:22:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77431' 00:12:51.677 07:22:13 -- common/autotest_common.sh@955 -- # kill 77431 00:12:51.677 07:22:13 -- common/autotest_common.sh@960 -- # wait 77431 00:12:51.937 07:22:14 -- target/tls.sh@37 -- # return 1 00:12:51.937 07:22:14 -- common/autotest_common.sh@653 -- # es=1 00:12:51.937 07:22:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:51.937 07:22:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:51.937 07:22:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:51.937 07:22:14 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:51.937 07:22:14 -- common/autotest_common.sh@650 -- # local es=0 00:12:51.937 07:22:14 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:51.937 07:22:14 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:51.937 07:22:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.937 07:22:14 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:51.937 07:22:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:51.937 07:22:14 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:51.937 07:22:14 -- target/tls.sh@22 -- # local subnqn 
hostnqn psk 00:12:51.937 07:22:14 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:51.938 07:22:14 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:51.938 07:22:14 -- target/tls.sh@23 -- # psk= 00:12:51.938 07:22:14 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:51.938 07:22:14 -- target/tls.sh@28 -- # bdevperf_pid=77458 00:12:51.938 07:22:14 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:51.938 07:22:14 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:51.938 07:22:14 -- target/tls.sh@31 -- # waitforlisten 77458 /var/tmp/bdevperf.sock 00:12:51.938 07:22:14 -- common/autotest_common.sh@829 -- # '[' -z 77458 ']' 00:12:51.938 07:22:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:51.938 07:22:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.938 07:22:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:51.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:51.938 07:22:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.938 07:22:14 -- common/autotest_common.sh@10 -- # set +x 00:12:51.938 [2024-11-28 07:22:14.068491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:51.938 [2024-11-28 07:22:14.068596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77458 ] 00:12:51.938 [2024-11-28 07:22:14.206046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.197 [2024-11-28 07:22:14.295946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.134 07:22:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.134 07:22:15 -- common/autotest_common.sh@862 -- # return 0 00:12:53.134 07:22:15 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:53.134 [2024-11-28 07:22:15.298436] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:53.134 [2024-11-28 07:22:15.299952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1c20 (9): Bad file descriptor 00:12:53.134 [2024-11-28 07:22:15.300947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:53.134 [2024-11-28 07:22:15.300971] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:53.134 [2024-11-28 07:22:15.300982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:53.134 request: 00:12:53.134 { 00:12:53.134 "name": "TLSTEST", 00:12:53.134 "trtype": "tcp", 00:12:53.134 "traddr": "10.0.0.2", 00:12:53.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:53.134 "adrfam": "ipv4", 00:12:53.134 "trsvcid": "4420", 00:12:53.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.134 "method": "bdev_nvme_attach_controller", 00:12:53.134 "req_id": 1 00:12:53.134 } 00:12:53.134 Got JSON-RPC error response 00:12:53.134 response: 00:12:53.134 { 00:12:53.134 "code": -32602, 00:12:53.134 "message": "Invalid parameters" 00:12:53.134 } 00:12:53.134 07:22:15 -- target/tls.sh@36 -- # killprocess 77458 00:12:53.134 07:22:15 -- common/autotest_common.sh@936 -- # '[' -z 77458 ']' 00:12:53.134 07:22:15 -- common/autotest_common.sh@940 -- # kill -0 77458 00:12:53.134 07:22:15 -- common/autotest_common.sh@941 -- # uname 00:12:53.134 07:22:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:53.134 07:22:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77458 00:12:53.134 killing process with pid 77458 00:12:53.134 Received shutdown signal, test time was about 10.000000 seconds 00:12:53.134 00:12:53.134 Latency(us) 00:12:53.134 [2024-11-28T07:22:15.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.134 [2024-11-28T07:22:15.409Z] =================================================================================================================== 00:12:53.134 [2024-11-28T07:22:15.409Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:53.134 07:22:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:53.134 07:22:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:53.134 07:22:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77458' 00:12:53.134 07:22:15 -- common/autotest_common.sh@955 -- # kill 77458 00:12:53.134 07:22:15 -- common/autotest_common.sh@960 -- # wait 77458 00:12:53.393 07:22:15 -- target/tls.sh@37 -- # return 1 00:12:53.393 07:22:15 -- common/autotest_common.sh@653 -- # es=1 00:12:53.393 07:22:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.393 07:22:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.393 07:22:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.393 07:22:15 -- target/tls.sh@167 -- # killprocess 76994 00:12:53.393 07:22:15 -- common/autotest_common.sh@936 -- # '[' -z 76994 ']' 00:12:53.393 07:22:15 -- common/autotest_common.sh@940 -- # kill -0 76994 00:12:53.393 07:22:15 -- common/autotest_common.sh@941 -- # uname 00:12:53.393 07:22:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:53.393 07:22:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76994 00:12:53.393 killing process with pid 76994 00:12:53.393 07:22:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:53.393 07:22:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:53.393 07:22:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76994' 00:12:53.393 07:22:15 -- common/autotest_common.sh@955 -- # kill 76994 00:12:53.393 07:22:15 -- common/autotest_common.sh@960 -- # wait 76994 00:12:53.653 07:22:15 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:12:53.653 07:22:15 -- target/tls.sh@49 -- # local key hash crc 00:12:53.653 07:22:15 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:53.653 07:22:15 -- target/tls.sh@51 -- # hash=02 
00:12:53.653 07:22:15 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:12:53.653 07:22:15 -- target/tls.sh@52 -- # tail -c8 00:12:53.653 07:22:15 -- target/tls.sh@52 -- # gzip -1 -c 00:12:53.653 07:22:15 -- target/tls.sh@52 -- # head -c 4 00:12:53.653 07:22:15 -- target/tls.sh@52 -- # crc='�e�'\''' 00:12:53.653 07:22:15 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:53.653 07:22:15 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:12:53.653 07:22:15 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:53.653 07:22:15 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:53.653 07:22:15 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:53.653 07:22:15 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:53.653 07:22:15 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:53.653 07:22:15 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:12:53.653 07:22:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:53.653 07:22:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:53.653 07:22:15 -- common/autotest_common.sh@10 -- # set +x 00:12:53.653 07:22:15 -- nvmf/common.sh@469 -- # nvmfpid=77501 00:12:53.653 07:22:15 -- nvmf/common.sh@470 -- # waitforlisten 77501 00:12:53.653 07:22:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:53.653 07:22:15 -- common/autotest_common.sh@829 -- # '[' -z 77501 ']' 00:12:53.653 07:22:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.653 07:22:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:53.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.653 07:22:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.653 07:22:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:53.653 07:22:15 -- common/autotest_common.sh@10 -- # set +x 00:12:53.913 [2024-11-28 07:22:15.945605] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:53.913 [2024-11-28 07:22:15.945748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.913 [2024-11-28 07:22:16.085106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.913 [2024-11-28 07:22:16.172715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.913 [2024-11-28 07:22:16.172874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.913 [2024-11-28 07:22:16.172890] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.913 [2024-11-28 07:22:16.172900] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:53.913 [2024-11-28 07:22:16.172937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.851 07:22:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:54.851 07:22:16 -- common/autotest_common.sh@862 -- # return 0 00:12:54.851 07:22:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:54.851 07:22:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:54.851 07:22:16 -- common/autotest_common.sh@10 -- # set +x 00:12:54.851 07:22:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.851 07:22:16 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:54.851 07:22:16 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:54.851 07:22:16 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:55.111 [2024-11-28 07:22:17.168997] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.111 07:22:17 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:55.371 07:22:17 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:55.631 [2024-11-28 07:22:17.685146] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:55.631 [2024-11-28 07:22:17.685488] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.631 07:22:17 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:55.890 malloc0 00:12:55.890 07:22:17 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:56.150 07:22:18 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:56.410 07:22:18 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:56.410 07:22:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:56.410 07:22:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:56.410 07:22:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:56.410 07:22:18 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:56.410 07:22:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:56.410 07:22:18 -- target/tls.sh@28 -- # bdevperf_pid=77561 00:12:56.410 07:22:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:56.410 07:22:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:56.410 07:22:18 -- target/tls.sh@31 -- # waitforlisten 77561 /var/tmp/bdevperf.sock 00:12:56.410 07:22:18 -- common/autotest_common.sh@829 -- # '[' -z 77561 ']' 00:12:56.410 07:22:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.410 07:22:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.410 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock... 00:12:56.410 07:22:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:56.410 07:22:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.410 07:22:18 -- common/autotest_common.sh@10 -- # set +x 00:12:56.410 [2024-11-28 07:22:18.481149] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:56.410 [2024-11-28 07:22:18.481250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77561 ] 00:12:56.410 [2024-11-28 07:22:18.618010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.669 [2024-11-28 07:22:18.705663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.238 07:22:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.238 07:22:19 -- common/autotest_common.sh@862 -- # return 0 00:12:57.238 07:22:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:57.497 [2024-11-28 07:22:19.675324] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:57.497 TLSTESTn1 00:12:57.497 07:22:19 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:57.756 Running I/O for 10 seconds... 00:13:07.741 00:13:07.741 Latency(us) 00:13:07.741 [2024-11-28T07:22:30.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.741 [2024-11-28T07:22:30.016Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:07.741 Verification LBA range: start 0x0 length 0x2000 00:13:07.741 TLSTESTn1 : 10.01 5529.65 21.60 0.00 0.00 23112.53 4647.10 28120.90 00:13:07.741 [2024-11-28T07:22:30.016Z] =================================================================================================================== 00:13:07.741 [2024-11-28T07:22:30.016Z] Total : 5529.65 21.60 0.00 0.00 23112.53 4647.10 28120.90 00:13:07.741 0 00:13:07.741 07:22:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:07.741 07:22:29 -- target/tls.sh@45 -- # killprocess 77561 00:13:07.741 07:22:29 -- common/autotest_common.sh@936 -- # '[' -z 77561 ']' 00:13:07.741 07:22:29 -- common/autotest_common.sh@940 -- # kill -0 77561 00:13:07.741 07:22:29 -- common/autotest_common.sh@941 -- # uname 00:13:07.741 07:22:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:07.741 07:22:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77561 00:13:07.741 07:22:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:07.741 07:22:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:07.741 07:22:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77561' 00:13:07.741 killing process with pid 77561 00:13:07.741 07:22:29 -- common/autotest_common.sh@955 -- # kill 77561 00:13:07.741 Received shutdown signal, test time was about 10.000000 seconds 00:13:07.741 00:13:07.741 Latency(us) 00:13:07.741 [2024-11-28T07:22:30.016Z] Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.741 [2024-11-28T07:22:30.016Z] =================================================================================================================== 00:13:07.741 [2024-11-28T07:22:30.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:07.741 07:22:29 -- common/autotest_common.sh@960 -- # wait 77561 00:13:08.001 07:22:30 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:08.001 07:22:30 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:08.001 07:22:30 -- common/autotest_common.sh@650 -- # local es=0 00:13:08.001 07:22:30 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:08.001 07:22:30 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:08.001 07:22:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.001 07:22:30 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:08.001 07:22:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:08.001 07:22:30 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:08.001 07:22:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:08.001 07:22:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:08.001 07:22:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:08.001 07:22:30 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:13:08.001 07:22:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:08.001 07:22:30 -- target/tls.sh@28 -- # bdevperf_pid=77690 00:13:08.001 07:22:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:08.001 07:22:30 -- target/tls.sh@31 -- # waitforlisten 77690 /var/tmp/bdevperf.sock 00:13:08.001 07:22:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:08.001 07:22:30 -- common/autotest_common.sh@829 -- # '[' -z 77690 ']' 00:13:08.001 07:22:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:08.001 07:22:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.001 07:22:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:08.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:08.001 07:22:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.001 07:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:08.001 [2024-11-28 07:22:30.241135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:08.001 [2024-11-28 07:22:30.241580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77690 ] 00:13:08.260 [2024-11-28 07:22:30.389184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.260 [2024-11-28 07:22:30.481784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.196 07:22:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.196 07:22:31 -- common/autotest_common.sh@862 -- # return 0 00:13:09.196 07:22:31 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:09.459 [2024-11-28 07:22:31.481135] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:09.459 [2024-11-28 07:22:31.481459] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:09.459 request: 00:13:09.459 { 00:13:09.459 "name": "TLSTEST", 00:13:09.459 "trtype": "tcp", 00:13:09.459 "traddr": "10.0.0.2", 00:13:09.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:09.459 "adrfam": "ipv4", 00:13:09.459 "trsvcid": "4420", 00:13:09.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.459 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:13:09.459 "method": "bdev_nvme_attach_controller", 00:13:09.459 "req_id": 1 00:13:09.459 } 00:13:09.459 Got JSON-RPC error response 00:13:09.459 response: 00:13:09.459 { 00:13:09.459 "code": -22, 00:13:09.459 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:13:09.459 } 00:13:09.459 07:22:31 -- target/tls.sh@36 -- # killprocess 77690 00:13:09.459 07:22:31 -- common/autotest_common.sh@936 -- # '[' -z 77690 ']' 00:13:09.459 07:22:31 -- common/autotest_common.sh@940 -- # kill -0 77690 00:13:09.459 07:22:31 -- common/autotest_common.sh@941 -- # uname 00:13:09.459 07:22:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:09.459 07:22:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77690 00:13:09.459 killing process with pid 77690 00:13:09.459 Received shutdown signal, test time was about 10.000000 seconds 00:13:09.459 00:13:09.459 Latency(us) 00:13:09.459 [2024-11-28T07:22:31.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.459 [2024-11-28T07:22:31.734Z] =================================================================================================================== 00:13:09.459 [2024-11-28T07:22:31.734Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:09.459 07:22:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:09.459 07:22:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:09.459 07:22:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77690' 00:13:09.459 07:22:31 -- common/autotest_common.sh@955 -- # kill 77690 00:13:09.459 07:22:31 -- common/autotest_common.sh@960 -- # wait 77690 00:13:09.718 07:22:31 -- target/tls.sh@37 -- # return 1 00:13:09.718 07:22:31 -- common/autotest_common.sh@653 -- # es=1 00:13:09.718 07:22:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:09.718 07:22:31 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:09.718 07:22:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:09.718 07:22:31 -- target/tls.sh@183 -- # killprocess 77501 00:13:09.718 07:22:31 -- common/autotest_common.sh@936 -- # '[' -z 77501 ']' 00:13:09.718 07:22:31 -- common/autotest_common.sh@940 -- # kill -0 77501 00:13:09.718 07:22:31 -- common/autotest_common.sh@941 -- # uname 00:13:09.718 07:22:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:09.718 07:22:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77501 00:13:09.718 killing process with pid 77501 00:13:09.718 07:22:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:09.718 07:22:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:09.718 07:22:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77501' 00:13:09.718 07:22:31 -- common/autotest_common.sh@955 -- # kill 77501 00:13:09.718 07:22:31 -- common/autotest_common.sh@960 -- # wait 77501 00:13:09.977 07:22:32 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:13:09.977 07:22:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:09.977 07:22:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:09.977 07:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:09.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.977 07:22:32 -- nvmf/common.sh@469 -- # nvmfpid=77728 00:13:09.977 07:22:32 -- nvmf/common.sh@470 -- # waitforlisten 77728 00:13:09.977 07:22:32 -- common/autotest_common.sh@829 -- # '[' -z 77728 ']' 00:13:09.977 07:22:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:09.977 07:22:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.977 07:22:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.977 07:22:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.977 07:22:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.977 07:22:32 -- common/autotest_common.sh@10 -- # set +x 00:13:09.977 [2024-11-28 07:22:32.076843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:09.977 [2024-11-28 07:22:32.077273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.977 [2024-11-28 07:22:32.215835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.236 [2024-11-28 07:22:32.307907] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:10.236 [2024-11-28 07:22:32.308058] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.236 [2024-11-28 07:22:32.308071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.236 [2024-11-28 07:22:32.308081] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.236 [2024-11-28 07:22:32.308107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.173 07:22:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.173 07:22:33 -- common/autotest_common.sh@862 -- # return 0 00:13:11.173 07:22:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:11.173 07:22:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.173 07:22:33 -- common/autotest_common.sh@10 -- # set +x 00:13:11.173 07:22:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.173 07:22:33 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:11.173 07:22:33 -- common/autotest_common.sh@650 -- # local es=0 00:13:11.173 07:22:33 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:11.173 07:22:33 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:11.173 07:22:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.173 07:22:33 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:11.173 07:22:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.173 07:22:33 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:11.173 07:22:33 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:11.173 07:22:33 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:11.173 [2024-11-28 07:22:33.375611] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.173 07:22:33 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:11.433 07:22:33 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:12.001 [2024-11-28 07:22:33.967737] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:12.001 [2024-11-28 07:22:33.968000] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.001 07:22:33 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:12.001 malloc0 00:13:12.001 07:22:34 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:12.260 07:22:34 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:12.520 [2024-11-28 07:22:34.746822] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:13:12.520 [2024-11-28 07:22:34.747915] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:13:12.520 [2024-11-28 07:22:34.747947] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:13:12.520 request: 00:13:12.520 { 00:13:12.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:12.520 "host": "nqn.2016-06.io.spdk:host1", 00:13:12.520 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:13:12.520 "method": "nvmf_subsystem_add_host", 00:13:12.520 
"req_id": 1 00:13:12.520 } 00:13:12.520 Got JSON-RPC error response 00:13:12.520 response: 00:13:12.520 { 00:13:12.520 "code": -32603, 00:13:12.520 "message": "Internal error" 00:13:12.520 } 00:13:12.520 07:22:34 -- common/autotest_common.sh@653 -- # es=1 00:13:12.520 07:22:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:12.520 07:22:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:12.520 07:22:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:12.520 07:22:34 -- target/tls.sh@189 -- # killprocess 77728 00:13:12.520 07:22:34 -- common/autotest_common.sh@936 -- # '[' -z 77728 ']' 00:13:12.520 07:22:34 -- common/autotest_common.sh@940 -- # kill -0 77728 00:13:12.520 07:22:34 -- common/autotest_common.sh@941 -- # uname 00:13:12.520 07:22:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:12.520 07:22:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77728 00:13:12.780 killing process with pid 77728 00:13:12.780 07:22:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:12.780 07:22:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:12.780 07:22:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77728' 00:13:12.780 07:22:34 -- common/autotest_common.sh@955 -- # kill 77728 00:13:12.780 07:22:34 -- common/autotest_common.sh@960 -- # wait 77728 00:13:12.780 07:22:35 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:12.780 07:22:35 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:13:12.780 07:22:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:12.780 07:22:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.780 07:22:35 -- common/autotest_common.sh@10 -- # set +x 00:13:12.780 07:22:35 -- nvmf/common.sh@469 -- # nvmfpid=77796 00:13:12.780 07:22:35 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:12.780 07:22:35 -- nvmf/common.sh@470 -- # waitforlisten 77796 00:13:12.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.780 07:22:35 -- common/autotest_common.sh@829 -- # '[' -z 77796 ']' 00:13:12.780 07:22:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.780 07:22:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.780 07:22:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.780 07:22:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.780 07:22:35 -- common/autotest_common.sh@10 -- # set +x 00:13:13.038 [2024-11-28 07:22:35.088721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:13.038 [2024-11-28 07:22:35.089129] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.038 [2024-11-28 07:22:35.226446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.297 [2024-11-28 07:22:35.317567] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:13.297 [2024-11-28 07:22:35.318014] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:13.297 [2024-11-28 07:22:35.318036] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.297 [2024-11-28 07:22:35.318048] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.297 [2024-11-28 07:22:35.318076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.866 07:22:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.866 07:22:36 -- common/autotest_common.sh@862 -- # return 0 00:13:13.866 07:22:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:13.866 07:22:36 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.866 07:22:36 -- common/autotest_common.sh@10 -- # set +x 00:13:13.866 07:22:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.866 07:22:36 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:13.866 07:22:36 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:13.866 07:22:36 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:14.125 [2024-11-28 07:22:36.321146] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.125 07:22:36 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:14.384 07:22:36 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:14.644 [2024-11-28 07:22:36.789251] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:14.644 [2024-11-28 07:22:36.789535] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.644 07:22:36 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:14.903 malloc0 00:13:14.903 07:22:37 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:15.162 07:22:37 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:15.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:15.422 07:22:37 -- target/tls.sh@197 -- # bdevperf_pid=77845 00:13:15.422 07:22:37 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:15.422 07:22:37 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:15.422 07:22:37 -- target/tls.sh@200 -- # waitforlisten 77845 /var/tmp/bdevperf.sock 00:13:15.422 07:22:37 -- common/autotest_common.sh@829 -- # '[' -z 77845 ']' 00:13:15.422 07:22:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:15.422 07:22:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.422 07:22:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
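Once the PSK file has been tightened to mode 0600 (the chmod at target/tls.sh@190 above), the same target-side setup succeeds, and the host side that follows drives I/O over the TLS listener with bdevperf. Roughly, again lifted from the trace rather than from the scripts themselves:

  chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  # start the initiator-side app (run in the background here for illustration; the harness manages the process itself)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach a TLS-protected controller to the target's listener using the same PSK file
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt
  # kick off the verify workload (reported as TLSTESTn1 in the results further down)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests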
00:13:15.422 07:22:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.422 07:22:37 -- common/autotest_common.sh@10 -- # set +x 00:13:15.422 [2024-11-28 07:22:37.617802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:15.422 [2024-11-28 07:22:37.619355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77845 ] 00:13:15.682 [2024-11-28 07:22:37.761535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.682 [2024-11-28 07:22:37.859729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.620 07:22:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.620 07:22:38 -- common/autotest_common.sh@862 -- # return 0 00:13:16.620 07:22:38 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:16.620 [2024-11-28 07:22:38.827288] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:16.878 TLSTESTn1 00:13:16.878 07:22:38 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:17.137 07:22:39 -- target/tls.sh@205 -- # tgtconf='{ 00:13:17.137 "subsystems": [ 00:13:17.137 { 00:13:17.137 "subsystem": "iobuf", 00:13:17.137 "config": [ 00:13:17.137 { 00:13:17.137 "method": "iobuf_set_options", 00:13:17.137 "params": { 00:13:17.137 "small_pool_count": 8192, 00:13:17.137 "large_pool_count": 1024, 00:13:17.138 "small_bufsize": 8192, 00:13:17.138 "large_bufsize": 135168 00:13:17.138 } 00:13:17.138 } 00:13:17.138 ] 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "subsystem": "sock", 00:13:17.138 "config": [ 00:13:17.138 { 00:13:17.138 "method": "sock_impl_set_options", 00:13:17.138 "params": { 00:13:17.138 "impl_name": "uring", 00:13:17.138 "recv_buf_size": 2097152, 00:13:17.138 "send_buf_size": 2097152, 00:13:17.138 "enable_recv_pipe": true, 00:13:17.138 "enable_quickack": false, 00:13:17.138 "enable_placement_id": 0, 00:13:17.138 "enable_zerocopy_send_server": false, 00:13:17.138 "enable_zerocopy_send_client": false, 00:13:17.138 "zerocopy_threshold": 0, 00:13:17.138 "tls_version": 0, 00:13:17.138 "enable_ktls": false 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "sock_impl_set_options", 00:13:17.138 "params": { 00:13:17.138 "impl_name": "posix", 00:13:17.138 "recv_buf_size": 2097152, 00:13:17.138 "send_buf_size": 2097152, 00:13:17.138 "enable_recv_pipe": true, 00:13:17.138 "enable_quickack": false, 00:13:17.138 "enable_placement_id": 0, 00:13:17.138 "enable_zerocopy_send_server": true, 00:13:17.138 "enable_zerocopy_send_client": false, 00:13:17.138 "zerocopy_threshold": 0, 00:13:17.138 "tls_version": 0, 00:13:17.138 "enable_ktls": false 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "sock_impl_set_options", 00:13:17.138 "params": { 00:13:17.138 "impl_name": "ssl", 00:13:17.138 "recv_buf_size": 4096, 00:13:17.138 "send_buf_size": 4096, 00:13:17.138 "enable_recv_pipe": true, 00:13:17.138 "enable_quickack": false, 00:13:17.138 "enable_placement_id": 0, 00:13:17.138 "enable_zerocopy_send_server": true, 00:13:17.138 "enable_zerocopy_send_client": false, 00:13:17.138 
"zerocopy_threshold": 0, 00:13:17.138 "tls_version": 0, 00:13:17.138 "enable_ktls": false 00:13:17.138 } 00:13:17.138 } 00:13:17.138 ] 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "subsystem": "vmd", 00:13:17.138 "config": [] 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "subsystem": "accel", 00:13:17.138 "config": [ 00:13:17.138 { 00:13:17.138 "method": "accel_set_options", 00:13:17.138 "params": { 00:13:17.138 "small_cache_size": 128, 00:13:17.138 "large_cache_size": 16, 00:13:17.138 "task_count": 2048, 00:13:17.138 "sequence_count": 2048, 00:13:17.138 "buf_count": 2048 00:13:17.138 } 00:13:17.138 } 00:13:17.138 ] 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "subsystem": "bdev", 00:13:17.138 "config": [ 00:13:17.138 { 00:13:17.138 "method": "bdev_set_options", 00:13:17.138 "params": { 00:13:17.138 "bdev_io_pool_size": 65535, 00:13:17.138 "bdev_io_cache_size": 256, 00:13:17.138 "bdev_auto_examine": true, 00:13:17.138 "iobuf_small_cache_size": 128, 00:13:17.138 "iobuf_large_cache_size": 16 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "bdev_raid_set_options", 00:13:17.138 "params": { 00:13:17.138 "process_window_size_kb": 1024 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "bdev_iscsi_set_options", 00:13:17.138 "params": { 00:13:17.138 "timeout_sec": 30 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "bdev_nvme_set_options", 00:13:17.138 "params": { 00:13:17.138 "action_on_timeout": "none", 00:13:17.138 "timeout_us": 0, 00:13:17.138 "timeout_admin_us": 0, 00:13:17.138 "keep_alive_timeout_ms": 10000, 00:13:17.138 "transport_retry_count": 4, 00:13:17.138 "arbitration_burst": 0, 00:13:17.138 "low_priority_weight": 0, 00:13:17.138 "medium_priority_weight": 0, 00:13:17.138 "high_priority_weight": 0, 00:13:17.138 "nvme_adminq_poll_period_us": 10000, 00:13:17.138 "nvme_ioq_poll_period_us": 0, 00:13:17.138 "io_queue_requests": 0, 00:13:17.138 "delay_cmd_submit": true, 00:13:17.138 "bdev_retry_count": 3, 00:13:17.138 "transport_ack_timeout": 0, 00:13:17.138 "ctrlr_loss_timeout_sec": 0, 00:13:17.138 "reconnect_delay_sec": 0, 00:13:17.138 "fast_io_fail_timeout_sec": 0, 00:13:17.138 "generate_uuids": false, 00:13:17.138 "transport_tos": 0, 00:13:17.138 "io_path_stat": false, 00:13:17.138 "allow_accel_sequence": false 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "bdev_nvme_set_hotplug", 00:13:17.138 "params": { 00:13:17.138 "period_us": 100000, 00:13:17.138 "enable": false 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "bdev_malloc_create", 00:13:17.138 "params": { 00:13:17.138 "name": "malloc0", 00:13:17.138 "num_blocks": 8192, 00:13:17.138 "block_size": 4096, 00:13:17.138 "physical_block_size": 4096, 00:13:17.138 "uuid": "df863cd3-d5f9-4c00-b306-031ad8e9966f", 00:13:17.138 "optimal_io_boundary": 0 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "bdev_wait_for_examine" 00:13:17.138 } 00:13:17.138 ] 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "subsystem": "nbd", 00:13:17.138 "config": [] 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "subsystem": "scheduler", 00:13:17.138 "config": [ 00:13:17.138 { 00:13:17.138 "method": "framework_set_scheduler", 00:13:17.138 "params": { 00:13:17.138 "name": "static" 00:13:17.138 } 00:13:17.138 } 00:13:17.138 ] 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "subsystem": "nvmf", 00:13:17.138 "config": [ 00:13:17.138 { 00:13:17.138 "method": "nvmf_set_config", 00:13:17.138 "params": { 00:13:17.138 "discovery_filter": "match_any", 00:13:17.138 
"admin_cmd_passthru": { 00:13:17.138 "identify_ctrlr": false 00:13:17.138 } 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "nvmf_set_max_subsystems", 00:13:17.138 "params": { 00:13:17.138 "max_subsystems": 1024 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "nvmf_set_crdt", 00:13:17.138 "params": { 00:13:17.138 "crdt1": 0, 00:13:17.138 "crdt2": 0, 00:13:17.138 "crdt3": 0 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "nvmf_create_transport", 00:13:17.138 "params": { 00:13:17.138 "trtype": "TCP", 00:13:17.138 "max_queue_depth": 128, 00:13:17.138 "max_io_qpairs_per_ctrlr": 127, 00:13:17.138 "in_capsule_data_size": 4096, 00:13:17.138 "max_io_size": 131072, 00:13:17.138 "io_unit_size": 131072, 00:13:17.138 "max_aq_depth": 128, 00:13:17.138 "num_shared_buffers": 511, 00:13:17.138 "buf_cache_size": 4294967295, 00:13:17.138 "dif_insert_or_strip": false, 00:13:17.138 "zcopy": false, 00:13:17.138 "c2h_success": false, 00:13:17.138 "sock_priority": 0, 00:13:17.138 "abort_timeout_sec": 1 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "nvmf_create_subsystem", 00:13:17.138 "params": { 00:13:17.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.138 "allow_any_host": false, 00:13:17.138 "serial_number": "SPDK00000000000001", 00:13:17.138 "model_number": "SPDK bdev Controller", 00:13:17.138 "max_namespaces": 10, 00:13:17.138 "min_cntlid": 1, 00:13:17.138 "max_cntlid": 65519, 00:13:17.138 "ana_reporting": false 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "nvmf_subsystem_add_host", 00:13:17.138 "params": { 00:13:17.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.138 "host": "nqn.2016-06.io.spdk:host1", 00:13:17.138 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:13:17.138 } 00:13:17.138 }, 00:13:17.138 { 00:13:17.138 "method": "nvmf_subsystem_add_ns", 00:13:17.139 "params": { 00:13:17.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.139 "namespace": { 00:13:17.139 "nsid": 1, 00:13:17.139 "bdev_name": "malloc0", 00:13:17.139 "nguid": "DF863CD3D5F94C00B306031AD8E9966F", 00:13:17.139 "uuid": "df863cd3-d5f9-4c00-b306-031ad8e9966f" 00:13:17.139 } 00:13:17.139 } 00:13:17.139 }, 00:13:17.139 { 00:13:17.139 "method": "nvmf_subsystem_add_listener", 00:13:17.139 "params": { 00:13:17.139 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.139 "listen_address": { 00:13:17.139 "trtype": "TCP", 00:13:17.139 "adrfam": "IPv4", 00:13:17.139 "traddr": "10.0.0.2", 00:13:17.139 "trsvcid": "4420" 00:13:17.139 }, 00:13:17.139 "secure_channel": true 00:13:17.139 } 00:13:17.139 } 00:13:17.139 ] 00:13:17.139 } 00:13:17.139 ] 00:13:17.139 }' 00:13:17.139 07:22:39 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:17.398 07:22:39 -- target/tls.sh@206 -- # bdevperfconf='{ 00:13:17.398 "subsystems": [ 00:13:17.398 { 00:13:17.398 "subsystem": "iobuf", 00:13:17.398 "config": [ 00:13:17.398 { 00:13:17.398 "method": "iobuf_set_options", 00:13:17.398 "params": { 00:13:17.398 "small_pool_count": 8192, 00:13:17.398 "large_pool_count": 1024, 00:13:17.398 "small_bufsize": 8192, 00:13:17.398 "large_bufsize": 135168 00:13:17.398 } 00:13:17.398 } 00:13:17.398 ] 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "subsystem": "sock", 00:13:17.398 "config": [ 00:13:17.398 { 00:13:17.398 "method": "sock_impl_set_options", 00:13:17.398 "params": { 00:13:17.398 "impl_name": "uring", 00:13:17.398 "recv_buf_size": 2097152, 00:13:17.398 "send_buf_size": 2097152, 
00:13:17.398 "enable_recv_pipe": true, 00:13:17.398 "enable_quickack": false, 00:13:17.398 "enable_placement_id": 0, 00:13:17.398 "enable_zerocopy_send_server": false, 00:13:17.398 "enable_zerocopy_send_client": false, 00:13:17.398 "zerocopy_threshold": 0, 00:13:17.398 "tls_version": 0, 00:13:17.398 "enable_ktls": false 00:13:17.398 } 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "method": "sock_impl_set_options", 00:13:17.398 "params": { 00:13:17.398 "impl_name": "posix", 00:13:17.398 "recv_buf_size": 2097152, 00:13:17.398 "send_buf_size": 2097152, 00:13:17.398 "enable_recv_pipe": true, 00:13:17.398 "enable_quickack": false, 00:13:17.398 "enable_placement_id": 0, 00:13:17.398 "enable_zerocopy_send_server": true, 00:13:17.398 "enable_zerocopy_send_client": false, 00:13:17.398 "zerocopy_threshold": 0, 00:13:17.398 "tls_version": 0, 00:13:17.398 "enable_ktls": false 00:13:17.398 } 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "method": "sock_impl_set_options", 00:13:17.398 "params": { 00:13:17.398 "impl_name": "ssl", 00:13:17.398 "recv_buf_size": 4096, 00:13:17.398 "send_buf_size": 4096, 00:13:17.398 "enable_recv_pipe": true, 00:13:17.398 "enable_quickack": false, 00:13:17.398 "enable_placement_id": 0, 00:13:17.398 "enable_zerocopy_send_server": true, 00:13:17.398 "enable_zerocopy_send_client": false, 00:13:17.398 "zerocopy_threshold": 0, 00:13:17.398 "tls_version": 0, 00:13:17.398 "enable_ktls": false 00:13:17.398 } 00:13:17.398 } 00:13:17.398 ] 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "subsystem": "vmd", 00:13:17.398 "config": [] 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "subsystem": "accel", 00:13:17.398 "config": [ 00:13:17.398 { 00:13:17.398 "method": "accel_set_options", 00:13:17.398 "params": { 00:13:17.398 "small_cache_size": 128, 00:13:17.398 "large_cache_size": 16, 00:13:17.398 "task_count": 2048, 00:13:17.398 "sequence_count": 2048, 00:13:17.398 "buf_count": 2048 00:13:17.398 } 00:13:17.398 } 00:13:17.398 ] 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "subsystem": "bdev", 00:13:17.398 "config": [ 00:13:17.398 { 00:13:17.398 "method": "bdev_set_options", 00:13:17.398 "params": { 00:13:17.398 "bdev_io_pool_size": 65535, 00:13:17.398 "bdev_io_cache_size": 256, 00:13:17.398 "bdev_auto_examine": true, 00:13:17.398 "iobuf_small_cache_size": 128, 00:13:17.398 "iobuf_large_cache_size": 16 00:13:17.398 } 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "method": "bdev_raid_set_options", 00:13:17.398 "params": { 00:13:17.398 "process_window_size_kb": 1024 00:13:17.398 } 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "method": "bdev_iscsi_set_options", 00:13:17.398 "params": { 00:13:17.398 "timeout_sec": 30 00:13:17.398 } 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "method": "bdev_nvme_set_options", 00:13:17.398 "params": { 00:13:17.398 "action_on_timeout": "none", 00:13:17.398 "timeout_us": 0, 00:13:17.398 "timeout_admin_us": 0, 00:13:17.398 "keep_alive_timeout_ms": 10000, 00:13:17.398 "transport_retry_count": 4, 00:13:17.398 "arbitration_burst": 0, 00:13:17.398 "low_priority_weight": 0, 00:13:17.398 "medium_priority_weight": 0, 00:13:17.398 "high_priority_weight": 0, 00:13:17.398 "nvme_adminq_poll_period_us": 10000, 00:13:17.398 "nvme_ioq_poll_period_us": 0, 00:13:17.398 "io_queue_requests": 512, 00:13:17.398 "delay_cmd_submit": true, 00:13:17.398 "bdev_retry_count": 3, 00:13:17.398 "transport_ack_timeout": 0, 00:13:17.398 "ctrlr_loss_timeout_sec": 0, 00:13:17.398 "reconnect_delay_sec": 0, 00:13:17.398 "fast_io_fail_timeout_sec": 0, 00:13:17.398 "generate_uuids": false, 00:13:17.398 
"transport_tos": 0, 00:13:17.398 "io_path_stat": false, 00:13:17.398 "allow_accel_sequence": false 00:13:17.398 } 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "method": "bdev_nvme_attach_controller", 00:13:17.398 "params": { 00:13:17.398 "name": "TLSTEST", 00:13:17.398 "trtype": "TCP", 00:13:17.398 "adrfam": "IPv4", 00:13:17.398 "traddr": "10.0.0.2", 00:13:17.398 "trsvcid": "4420", 00:13:17.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.398 "prchk_reftag": false, 00:13:17.398 "prchk_guard": false, 00:13:17.398 "ctrlr_loss_timeout_sec": 0, 00:13:17.398 "reconnect_delay_sec": 0, 00:13:17.398 "fast_io_fail_timeout_sec": 0, 00:13:17.398 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:13:17.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:17.398 "hdgst": false, 00:13:17.398 "ddgst": false 00:13:17.398 } 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "method": "bdev_nvme_set_hotplug", 00:13:17.398 "params": { 00:13:17.398 "period_us": 100000, 00:13:17.398 "enable": false 00:13:17.398 } 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "method": "bdev_wait_for_examine" 00:13:17.398 } 00:13:17.398 ] 00:13:17.398 }, 00:13:17.398 { 00:13:17.398 "subsystem": "nbd", 00:13:17.398 "config": [] 00:13:17.398 } 00:13:17.398 ] 00:13:17.398 }' 00:13:17.398 07:22:39 -- target/tls.sh@208 -- # killprocess 77845 00:13:17.398 07:22:39 -- common/autotest_common.sh@936 -- # '[' -z 77845 ']' 00:13:17.398 07:22:39 -- common/autotest_common.sh@940 -- # kill -0 77845 00:13:17.398 07:22:39 -- common/autotest_common.sh@941 -- # uname 00:13:17.398 07:22:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:17.398 07:22:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77845 00:13:17.398 killing process with pid 77845 00:13:17.398 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.398 00:13:17.398 Latency(us) 00:13:17.398 [2024-11-28T07:22:39.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.398 [2024-11-28T07:22:39.673Z] =================================================================================================================== 00:13:17.398 [2024-11-28T07:22:39.673Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:17.398 07:22:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:17.398 07:22:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:17.398 07:22:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77845' 00:13:17.398 07:22:39 -- common/autotest_common.sh@955 -- # kill 77845 00:13:17.398 07:22:39 -- common/autotest_common.sh@960 -- # wait 77845 00:13:17.657 07:22:39 -- target/tls.sh@209 -- # killprocess 77796 00:13:17.657 07:22:39 -- common/autotest_common.sh@936 -- # '[' -z 77796 ']' 00:13:17.657 07:22:39 -- common/autotest_common.sh@940 -- # kill -0 77796 00:13:17.657 07:22:39 -- common/autotest_common.sh@941 -- # uname 00:13:17.657 07:22:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:17.657 07:22:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77796 00:13:17.657 killing process with pid 77796 00:13:17.657 07:22:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:17.657 07:22:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:17.657 07:22:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77796' 00:13:17.657 07:22:39 -- common/autotest_common.sh@955 -- # kill 77796 00:13:17.657 07:22:39 -- common/autotest_common.sh@960 -- # 
wait 77796 00:13:17.917 07:22:40 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:17.917 07:22:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:17.917 07:22:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:17.917 07:22:40 -- target/tls.sh@212 -- # echo '{ 00:13:17.917 "subsystems": [ 00:13:17.917 { 00:13:17.917 "subsystem": "iobuf", 00:13:17.917 "config": [ 00:13:17.917 { 00:13:17.917 "method": "iobuf_set_options", 00:13:17.917 "params": { 00:13:17.917 "small_pool_count": 8192, 00:13:17.917 "large_pool_count": 1024, 00:13:17.917 "small_bufsize": 8192, 00:13:17.917 "large_bufsize": 135168 00:13:17.917 } 00:13:17.917 } 00:13:17.917 ] 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "subsystem": "sock", 00:13:17.917 "config": [ 00:13:17.917 { 00:13:17.917 "method": "sock_impl_set_options", 00:13:17.917 "params": { 00:13:17.917 "impl_name": "uring", 00:13:17.917 "recv_buf_size": 2097152, 00:13:17.917 "send_buf_size": 2097152, 00:13:17.917 "enable_recv_pipe": true, 00:13:17.917 "enable_quickack": false, 00:13:17.917 "enable_placement_id": 0, 00:13:17.917 "enable_zerocopy_send_server": false, 00:13:17.917 "enable_zerocopy_send_client": false, 00:13:17.917 "zerocopy_threshold": 0, 00:13:17.917 "tls_version": 0, 00:13:17.917 "enable_ktls": false 00:13:17.917 } 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "method": "sock_impl_set_options", 00:13:17.917 "params": { 00:13:17.917 "impl_name": "posix", 00:13:17.917 "recv_buf_size": 2097152, 00:13:17.917 "send_buf_size": 2097152, 00:13:17.917 "enable_recv_pipe": true, 00:13:17.917 "enable_quickack": false, 00:13:17.917 "enable_placement_id": 0, 00:13:17.917 "enable_zerocopy_send_server": true, 00:13:17.917 "enable_zerocopy_send_client": false, 00:13:17.917 "zerocopy_threshold": 0, 00:13:17.917 "tls_version": 0, 00:13:17.917 "enable_ktls": false 00:13:17.917 } 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "method": "sock_impl_set_options", 00:13:17.917 "params": { 00:13:17.917 "impl_name": "ssl", 00:13:17.917 "recv_buf_size": 4096, 00:13:17.917 "send_buf_size": 4096, 00:13:17.917 "enable_recv_pipe": true, 00:13:17.917 "enable_quickack": false, 00:13:17.917 "enable_placement_id": 0, 00:13:17.917 "enable_zerocopy_send_server": true, 00:13:17.917 "enable_zerocopy_send_client": false, 00:13:17.917 "zerocopy_threshold": 0, 00:13:17.917 "tls_version": 0, 00:13:17.917 "enable_ktls": false 00:13:17.917 } 00:13:17.917 } 00:13:17.917 ] 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "subsystem": "vmd", 00:13:17.917 "config": [] 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "subsystem": "accel", 00:13:17.917 "config": [ 00:13:17.917 { 00:13:17.917 "method": "accel_set_options", 00:13:17.917 "params": { 00:13:17.917 "small_cache_size": 128, 00:13:17.917 "large_cache_size": 16, 00:13:17.917 "task_count": 2048, 00:13:17.917 "sequence_count": 2048, 00:13:17.917 "buf_count": 2048 00:13:17.917 } 00:13:17.917 } 00:13:17.917 ] 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "subsystem": "bdev", 00:13:17.917 "config": [ 00:13:17.917 { 00:13:17.917 "method": "bdev_set_options", 00:13:17.917 "params": { 00:13:17.917 "bdev_io_pool_size": 65535, 00:13:17.917 "bdev_io_cache_size": 256, 00:13:17.917 "bdev_auto_examine": true, 00:13:17.917 "iobuf_small_cache_size": 128, 00:13:17.917 "iobuf_large_cache_size": 16 00:13:17.917 } 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "method": "bdev_raid_set_options", 00:13:17.917 "params": { 00:13:17.917 "process_window_size_kb": 1024 00:13:17.917 } 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "method": 
"bdev_iscsi_set_options", 00:13:17.917 "params": { 00:13:17.917 "timeout_sec": 30 00:13:17.917 } 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "method": "bdev_nvme_set_options", 00:13:17.917 "params": { 00:13:17.917 "action_on_timeout": "none", 00:13:17.917 "timeout_us": 0, 00:13:17.917 "timeout_admin_us": 0, 00:13:17.917 "keep_alive_timeout_ms": 10000, 00:13:17.917 "transport_retry_count": 4, 00:13:17.917 "arbitration_burst": 0, 00:13:17.917 "low_priority_weight": 0, 00:13:17.917 "medium_priority_weight": 0, 00:13:17.917 "high_priority_weight": 0, 00:13:17.917 "nvme_adminq_poll_period_us": 10000, 00:13:17.917 "nvme_ioq_poll_period_us": 0, 00:13:17.917 "io_queue_requests": 0, 00:13:17.917 "delay_cmd_submit": true, 00:13:17.917 "bdev_retry_count": 3, 00:13:17.917 "transport_ack_timeout": 0, 00:13:17.917 "ctrlr_loss_timeout_sec": 0, 00:13:17.917 "reconnect_delay_sec": 0, 00:13:17.917 "fast_io_fail_timeout_sec": 0, 00:13:17.917 "generate_uuids": false, 00:13:17.917 "transport_tos": 0, 00:13:17.917 "io_path_stat": false, 00:13:17.917 "allow_accel_sequence": false 00:13:17.917 } 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "method": "bdev_nvme_set_hotplug", 00:13:17.917 "params": { 00:13:17.917 "period_us": 100000, 00:13:17.917 "enable": false 00:13:17.917 } 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "method": "bdev_malloc_create", 00:13:17.917 "params": { 00:13:17.917 "name": "malloc0", 00:13:17.917 "num_blocks": 8192, 00:13:17.917 "block_size": 4096, 00:13:17.917 "physical_block_size": 4096, 00:13:17.917 "uuid": "df863cd3-d5f9-4c00-b306-031ad8e9966f", 00:13:17.917 "optimal_io_boundary": 0 00:13:17.917 } 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "method": "bdev_wait_for_examine" 00:13:17.917 } 00:13:17.917 ] 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "subsystem": "nbd", 00:13:17.917 "config": [] 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "subsystem": "scheduler", 00:13:17.917 "config": [ 00:13:17.917 { 00:13:17.917 "method": "framework_set_scheduler", 00:13:17.917 "params": { 00:13:17.917 "name": "static" 00:13:17.917 } 00:13:17.917 } 00:13:17.917 ] 00:13:17.917 }, 00:13:17.917 { 00:13:17.917 "subsystem": "nvmf", 00:13:17.917 "config": [ 00:13:17.917 { 00:13:17.917 "method": "nvmf_set_config", 00:13:17.917 "params": { 00:13:17.917 "discovery_filter": "match_any", 00:13:17.917 "admin_cmd_passthru": { 00:13:17.917 "identify_ctrlr": false 00:13:17.918 } 00:13:17.918 } 00:13:17.918 }, 00:13:17.918 { 00:13:17.918 "method": "nvmf_set_max_subsystems", 00:13:17.918 "params": { 00:13:17.918 "max_subsystems": 1024 00:13:17.918 } 00:13:17.918 }, 00:13:17.918 { 00:13:17.918 "method": "nvmf_set_crdt", 00:13:17.918 "params": { 00:13:17.918 "crdt1": 0, 00:13:17.918 "crdt2": 0, 00:13:17.918 "crdt3": 0 00:13:17.918 } 00:13:17.918 }, 00:13:17.918 { 00:13:17.918 "method": "nvmf_create_transport", 00:13:17.918 "params": { 00:13:17.918 "trtype": "TCP", 00:13:17.918 "max_queue_depth": 128, 00:13:17.918 "max_io_qpairs_per_ctrlr": 127, 00:13:17.918 "in_capsule_data_size": 4096, 00:13:17.918 "max_io_size": 131072, 00:13:17.918 "io_unit_size": 131072, 00:13:17.918 "max_aq_depth": 128, 00:13:17.918 "num_shared_buffers": 511, 00:13:17.918 "buf_cache_size": 4294967295, 00:13:17.918 "dif_insert_or_strip": false, 00:13:17.918 "zcopy": false, 00:13:17.918 "c2h_success": false, 00:13:17.918 "sock_priority": 0, 00:13:17.918 "abort_timeout_sec": 1 00:13:17.918 } 00:13:17.918 }, 00:13:17.918 { 00:13:17.918 "method": "nvmf_create_subsystem", 00:13:17.918 "params": { 00:13:17.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.918 
"allow_any_host": false, 00:13:17.918 "serial_number": "SPDK00000000000001", 00:13:17.918 "model_number": "SPDK bdev Controller", 00:13:17.918 "max_namespaces": 10, 00:13:17.918 "min_cntlid": 1, 00:13:17.918 "max_cntlid": 65519, 00:13:17.918 "ana_reporting": false 00:13:17.918 } 00:13:17.918 }, 00:13:17.918 { 00:13:17.918 "method": "nvmf_subsystem_add_host", 00:13:17.918 "params": { 00:13:17.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.918 "host": "nqn.2016-06.io.spdk:host1", 00:13:17.918 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:13:17.918 } 00:13:17.918 }, 00:13:17.918 { 00:13:17.918 "method": "nvmf_subsystem_add_ns", 00:13:17.918 "params": { 00:13:17.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.918 "namespace": { 00:13:17.918 "nsid": 1, 00:13:17.918 "bdev_name": "malloc0", 00:13:17.918 "nguid": "DF863CD3D5F94C00B306031AD8E9966F", 00:13:17.918 "uuid": "df863cd3-d5f9-4c00-b306-031ad8e9966f" 00:13:17.918 } 00:13:17.918 } 00:13:17.918 }, 00:13:17.918 { 00:13:17.918 "method": "nvmf_subsystem_add_listener", 00:13:17.918 "params": { 00:13:17.918 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.918 "listen_address": { 00:13:17.918 "trtype": "TCP", 00:13:17.918 "adrfam": "IPv4", 00:13:17.918 "traddr": "10.0.0.2", 00:13:17.918 "trsvcid": "4420" 00:13:17.918 }, 00:13:17.918 "secure_channel": true 00:13:17.918 } 00:13:17.918 } 00:13:17.918 ] 00:13:17.918 } 00:13:17.918 ] 00:13:17.918 }' 00:13:17.918 07:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:18.177 07:22:40 -- nvmf/common.sh@469 -- # nvmfpid=77894 00:13:18.177 07:22:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:18.177 07:22:40 -- nvmf/common.sh@470 -- # waitforlisten 77894 00:13:18.177 07:22:40 -- common/autotest_common.sh@829 -- # '[' -z 77894 ']' 00:13:18.177 07:22:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.177 07:22:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.177 07:22:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.177 07:22:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.177 07:22:40 -- common/autotest_common.sh@10 -- # set +x 00:13:18.177 [2024-11-28 07:22:40.242979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:18.177 [2024-11-28 07:22:40.243356] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.177 [2024-11-28 07:22:40.381033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.436 [2024-11-28 07:22:40.513235] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:18.436 [2024-11-28 07:22:40.513461] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.436 [2024-11-28 07:22:40.513499] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.436 [2024-11-28 07:22:40.513513] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:18.436 [2024-11-28 07:22:40.513543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.695 [2024-11-28 07:22:40.784081] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.695 [2024-11-28 07:22:40.816030] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:18.695 [2024-11-28 07:22:40.816340] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.952 07:22:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:18.952 07:22:41 -- common/autotest_common.sh@862 -- # return 0 00:13:18.952 07:22:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:18.952 07:22:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:18.952 07:22:41 -- common/autotest_common.sh@10 -- # set +x 00:13:19.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:19.211 07:22:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.211 07:22:41 -- target/tls.sh@216 -- # bdevperf_pid=77926 00:13:19.211 07:22:41 -- target/tls.sh@217 -- # waitforlisten 77926 /var/tmp/bdevperf.sock 00:13:19.211 07:22:41 -- common/autotest_common.sh@829 -- # '[' -z 77926 ']' 00:13:19.211 07:22:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:19.211 07:22:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.211 07:22:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:19.211 07:22:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.211 07:22:41 -- common/autotest_common.sh@10 -- # set +x 00:13:19.211 07:22:41 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:19.211 07:22:41 -- target/tls.sh@213 -- # echo '{ 00:13:19.211 "subsystems": [ 00:13:19.211 { 00:13:19.211 "subsystem": "iobuf", 00:13:19.211 "config": [ 00:13:19.211 { 00:13:19.211 "method": "iobuf_set_options", 00:13:19.211 "params": { 00:13:19.211 "small_pool_count": 8192, 00:13:19.211 "large_pool_count": 1024, 00:13:19.211 "small_bufsize": 8192, 00:13:19.211 "large_bufsize": 135168 00:13:19.211 } 00:13:19.211 } 00:13:19.211 ] 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "subsystem": "sock", 00:13:19.211 "config": [ 00:13:19.211 { 00:13:19.211 "method": "sock_impl_set_options", 00:13:19.211 "params": { 00:13:19.211 "impl_name": "uring", 00:13:19.211 "recv_buf_size": 2097152, 00:13:19.211 "send_buf_size": 2097152, 00:13:19.211 "enable_recv_pipe": true, 00:13:19.211 "enable_quickack": false, 00:13:19.211 "enable_placement_id": 0, 00:13:19.211 "enable_zerocopy_send_server": false, 00:13:19.211 "enable_zerocopy_send_client": false, 00:13:19.211 "zerocopy_threshold": 0, 00:13:19.211 "tls_version": 0, 00:13:19.211 "enable_ktls": false 00:13:19.211 } 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "method": "sock_impl_set_options", 00:13:19.211 "params": { 00:13:19.211 "impl_name": "posix", 00:13:19.211 "recv_buf_size": 2097152, 00:13:19.211 "send_buf_size": 2097152, 00:13:19.211 "enable_recv_pipe": true, 00:13:19.211 "enable_quickack": false, 00:13:19.211 "enable_placement_id": 0, 00:13:19.211 "enable_zerocopy_send_server": true, 00:13:19.211 "enable_zerocopy_send_client": false, 00:13:19.211 "zerocopy_threshold": 0, 00:13:19.211 "tls_version": 0, 00:13:19.211 
"enable_ktls": false 00:13:19.211 } 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "method": "sock_impl_set_options", 00:13:19.211 "params": { 00:13:19.211 "impl_name": "ssl", 00:13:19.211 "recv_buf_size": 4096, 00:13:19.211 "send_buf_size": 4096, 00:13:19.211 "enable_recv_pipe": true, 00:13:19.211 "enable_quickack": false, 00:13:19.211 "enable_placement_id": 0, 00:13:19.211 "enable_zerocopy_send_server": true, 00:13:19.211 "enable_zerocopy_send_client": false, 00:13:19.211 "zerocopy_threshold": 0, 00:13:19.211 "tls_version": 0, 00:13:19.211 "enable_ktls": false 00:13:19.211 } 00:13:19.211 } 00:13:19.211 ] 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "subsystem": "vmd", 00:13:19.211 "config": [] 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "subsystem": "accel", 00:13:19.211 "config": [ 00:13:19.211 { 00:13:19.211 "method": "accel_set_options", 00:13:19.211 "params": { 00:13:19.211 "small_cache_size": 128, 00:13:19.211 "large_cache_size": 16, 00:13:19.211 "task_count": 2048, 00:13:19.211 "sequence_count": 2048, 00:13:19.211 "buf_count": 2048 00:13:19.211 } 00:13:19.211 } 00:13:19.211 ] 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "subsystem": "bdev", 00:13:19.211 "config": [ 00:13:19.211 { 00:13:19.211 "method": "bdev_set_options", 00:13:19.211 "params": { 00:13:19.211 "bdev_io_pool_size": 65535, 00:13:19.211 "bdev_io_cache_size": 256, 00:13:19.211 "bdev_auto_examine": true, 00:13:19.211 "iobuf_small_cache_size": 128, 00:13:19.211 "iobuf_large_cache_size": 16 00:13:19.211 } 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "method": "bdev_raid_set_options", 00:13:19.211 "params": { 00:13:19.211 "process_window_size_kb": 1024 00:13:19.211 } 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "method": "bdev_iscsi_set_options", 00:13:19.211 "params": { 00:13:19.211 "timeout_sec": 30 00:13:19.211 } 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "method": "bdev_nvme_set_options", 00:13:19.211 "params": { 00:13:19.211 "action_on_timeout": "none", 00:13:19.211 "timeout_us": 0, 00:13:19.211 "timeout_admin_us": 0, 00:13:19.211 "keep_alive_timeout_ms": 10000, 00:13:19.211 "transport_retry_count": 4, 00:13:19.211 "arbitration_burst": 0, 00:13:19.211 "low_priority_weight": 0, 00:13:19.211 "medium_priority_weight": 0, 00:13:19.211 "high_priority_weight": 0, 00:13:19.211 "nvme_adminq_poll_period_us": 10000, 00:13:19.211 "nvme_ioq_poll_period_us": 0, 00:13:19.211 "io_queue_requests": 512, 00:13:19.211 "delay_cmd_submit": true, 00:13:19.211 "bdev_retry_count": 3, 00:13:19.211 "transport_ack_timeout": 0, 00:13:19.211 "ctrlr_loss_timeout_sec": 0, 00:13:19.211 "reconnect_delay_sec": 0, 00:13:19.211 "fast_io_fail_timeout_sec": 0, 00:13:19.211 "generate_uuids": false, 00:13:19.211 "transport_tos": 0, 00:13:19.211 "io_path_stat": false, 00:13:19.211 "allow_accel_sequence": false 00:13:19.211 } 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "method": "bdev_nvme_attach_controller", 00:13:19.211 "params": { 00:13:19.211 "name": "TLSTEST", 00:13:19.211 "trtype": "TCP", 00:13:19.211 "adrfam": "IPv4", 00:13:19.211 "traddr": "10.0.0.2", 00:13:19.211 "trsvcid": "4420", 00:13:19.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.211 "prchk_reftag": false, 00:13:19.211 "prchk_guard": false, 00:13:19.211 "ctrlr_loss_timeout_sec": 0, 00:13:19.211 "reconnect_delay_sec": 0, 00:13:19.211 "fast_io_fail_timeout_sec": 0, 00:13:19.211 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:13:19.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:19.211 "hdgst": false, 00:13:19.211 "ddgst": false 00:13:19.211 } 00:13:19.211 }, 00:13:19.211 
{ 00:13:19.211 "method": "bdev_nvme_set_hotplug", 00:13:19.211 "params": { 00:13:19.211 "period_us": 100000, 00:13:19.211 "enable": false 00:13:19.211 } 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "method": "bdev_wait_for_examine" 00:13:19.211 } 00:13:19.211 ] 00:13:19.211 }, 00:13:19.211 { 00:13:19.211 "subsystem": "nbd", 00:13:19.211 "config": [] 00:13:19.211 } 00:13:19.211 ] 00:13:19.211 }' 00:13:19.211 [2024-11-28 07:22:41.316965] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:19.211 [2024-11-28 07:22:41.317388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77926 ] 00:13:19.212 [2024-11-28 07:22:41.457867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.470 [2024-11-28 07:22:41.555145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.470 [2024-11-28 07:22:41.723187] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:20.409 07:22:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.409 07:22:42 -- common/autotest_common.sh@862 -- # return 0 00:13:20.409 07:22:42 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:20.409 Running I/O for 10 seconds... 00:13:30.446 00:13:30.446 Latency(us) 00:13:30.446 [2024-11-28T07:22:52.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.446 [2024-11-28T07:22:52.721Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:30.446 Verification LBA range: start 0x0 length 0x2000 00:13:30.446 TLSTESTn1 : 10.02 5610.52 21.92 0.00 0.00 22772.10 6017.40 26691.03 00:13:30.446 [2024-11-28T07:22:52.721Z] =================================================================================================================== 00:13:30.446 [2024-11-28T07:22:52.721Z] Total : 5610.52 21.92 0.00 0.00 22772.10 6017.40 26691.03 00:13:30.446 0 00:13:30.446 07:22:52 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:30.446 07:22:52 -- target/tls.sh@223 -- # killprocess 77926 00:13:30.446 07:22:52 -- common/autotest_common.sh@936 -- # '[' -z 77926 ']' 00:13:30.446 07:22:52 -- common/autotest_common.sh@940 -- # kill -0 77926 00:13:30.446 07:22:52 -- common/autotest_common.sh@941 -- # uname 00:13:30.446 07:22:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:30.446 07:22:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77926 00:13:30.446 07:22:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:30.446 07:22:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:30.446 killing process with pid 77926 00:13:30.446 07:22:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77926' 00:13:30.446 Received shutdown signal, test time was about 10.000000 seconds 00:13:30.446 00:13:30.446 Latency(us) 00:13:30.446 [2024-11-28T07:22:52.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.446 [2024-11-28T07:22:52.721Z] =================================================================================================================== 00:13:30.446 [2024-11-28T07:22:52.721Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:30.447 07:22:52 -- 
common/autotest_common.sh@955 -- # kill 77926 00:13:30.447 07:22:52 -- common/autotest_common.sh@960 -- # wait 77926 00:13:30.706 07:22:52 -- target/tls.sh@224 -- # killprocess 77894 00:13:30.706 07:22:52 -- common/autotest_common.sh@936 -- # '[' -z 77894 ']' 00:13:30.706 07:22:52 -- common/autotest_common.sh@940 -- # kill -0 77894 00:13:30.706 07:22:52 -- common/autotest_common.sh@941 -- # uname 00:13:30.706 07:22:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:30.706 07:22:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77894 00:13:30.706 07:22:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:30.706 killing process with pid 77894 00:13:30.706 07:22:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:30.706 07:22:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77894' 00:13:30.706 07:22:52 -- common/autotest_common.sh@955 -- # kill 77894 00:13:30.706 07:22:52 -- common/autotest_common.sh@960 -- # wait 77894 00:13:30.966 07:22:53 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:13:30.966 07:22:53 -- target/tls.sh@227 -- # cleanup 00:13:30.966 07:22:53 -- target/tls.sh@15 -- # process_shm --id 0 00:13:30.966 07:22:53 -- common/autotest_common.sh@806 -- # type=--id 00:13:30.966 07:22:53 -- common/autotest_common.sh@807 -- # id=0 00:13:30.966 07:22:53 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:30.966 07:22:53 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:30.966 07:22:53 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:30.966 07:22:53 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:30.966 07:22:53 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:30.966 07:22:53 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:30.966 nvmf_trace.0 00:13:30.966 07:22:53 -- common/autotest_common.sh@821 -- # return 0 00:13:30.966 07:22:53 -- target/tls.sh@16 -- # killprocess 77926 00:13:30.966 07:22:53 -- common/autotest_common.sh@936 -- # '[' -z 77926 ']' 00:13:30.966 07:22:53 -- common/autotest_common.sh@940 -- # kill -0 77926 00:13:30.966 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77926) - No such process 00:13:30.966 Process with pid 77926 is not found 00:13:30.966 07:22:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77926 is not found' 00:13:30.966 07:22:53 -- target/tls.sh@17 -- # nvmftestfini 00:13:30.966 07:22:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:30.966 07:22:53 -- nvmf/common.sh@116 -- # sync 00:13:30.966 07:22:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:30.966 07:22:53 -- nvmf/common.sh@119 -- # set +e 00:13:30.966 07:22:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:30.966 07:22:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:30.966 rmmod nvme_tcp 00:13:30.966 rmmod nvme_fabrics 00:13:30.966 rmmod nvme_keyring 00:13:30.966 07:22:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:30.966 07:22:53 -- nvmf/common.sh@123 -- # set -e 00:13:30.966 07:22:53 -- nvmf/common.sh@124 -- # return 0 00:13:30.966 07:22:53 -- nvmf/common.sh@477 -- # '[' -n 77894 ']' 00:13:30.966 07:22:53 -- nvmf/common.sh@478 -- # killprocess 77894 00:13:30.966 07:22:53 -- common/autotest_common.sh@936 -- # '[' -z 77894 ']' 00:13:30.966 07:22:53 -- common/autotest_common.sh@940 -- # kill -0 77894 00:13:30.966 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77894) - No such process 00:13:30.966 Process with pid 77894 is not found 00:13:30.966 07:22:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77894 is not found' 00:13:30.966 07:22:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:30.966 07:22:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:30.966 07:22:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:30.966 07:22:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.966 07:22:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:30.966 07:22:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.966 07:22:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.966 07:22:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.966 07:22:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:31.227 07:22:53 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:31.227 00:13:31.227 real 1m12.438s 00:13:31.227 user 1m51.376s 00:13:31.227 sys 0m25.303s 00:13:31.227 07:22:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:31.227 07:22:53 -- common/autotest_common.sh@10 -- # set +x 00:13:31.227 ************************************ 00:13:31.227 END TEST nvmf_tls 00:13:31.227 ************************************ 00:13:31.227 07:22:53 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:31.227 07:22:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:31.227 07:22:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.227 07:22:53 -- common/autotest_common.sh@10 -- # set +x 00:13:31.227 ************************************ 00:13:31.227 START TEST nvmf_fips 00:13:31.227 ************************************ 00:13:31.227 07:22:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:31.227 * Looking for test storage... 
00:13:31.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:31.227 07:22:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:31.227 07:22:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:31.227 07:22:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:31.227 07:22:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:31.227 07:22:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:31.227 07:22:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:31.227 07:22:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:31.227 07:22:53 -- scripts/common.sh@335 -- # IFS=.-: 00:13:31.227 07:22:53 -- scripts/common.sh@335 -- # read -ra ver1 00:13:31.227 07:22:53 -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.227 07:22:53 -- scripts/common.sh@336 -- # read -ra ver2 00:13:31.227 07:22:53 -- scripts/common.sh@337 -- # local 'op=<' 00:13:31.227 07:22:53 -- scripts/common.sh@339 -- # ver1_l=2 00:13:31.227 07:22:53 -- scripts/common.sh@340 -- # ver2_l=1 00:13:31.227 07:22:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:31.227 07:22:53 -- scripts/common.sh@343 -- # case "$op" in 00:13:31.227 07:22:53 -- scripts/common.sh@344 -- # : 1 00:13:31.227 07:22:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:31.227 07:22:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:31.227 07:22:53 -- scripts/common.sh@364 -- # decimal 1 00:13:31.227 07:22:53 -- scripts/common.sh@352 -- # local d=1 00:13:31.227 07:22:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.227 07:22:53 -- scripts/common.sh@354 -- # echo 1 00:13:31.227 07:22:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:31.227 07:22:53 -- scripts/common.sh@365 -- # decimal 2 00:13:31.227 07:22:53 -- scripts/common.sh@352 -- # local d=2 00:13:31.227 07:22:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.227 07:22:53 -- scripts/common.sh@354 -- # echo 2 00:13:31.227 07:22:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:31.227 07:22:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:31.227 07:22:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:31.227 07:22:53 -- scripts/common.sh@367 -- # return 0 00:13:31.227 07:22:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.227 07:22:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:31.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.227 --rc genhtml_branch_coverage=1 00:13:31.227 --rc genhtml_function_coverage=1 00:13:31.227 --rc genhtml_legend=1 00:13:31.227 --rc geninfo_all_blocks=1 00:13:31.227 --rc geninfo_unexecuted_blocks=1 00:13:31.227 00:13:31.227 ' 00:13:31.227 07:22:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:31.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.227 --rc genhtml_branch_coverage=1 00:13:31.227 --rc genhtml_function_coverage=1 00:13:31.227 --rc genhtml_legend=1 00:13:31.227 --rc geninfo_all_blocks=1 00:13:31.227 --rc geninfo_unexecuted_blocks=1 00:13:31.227 00:13:31.227 ' 00:13:31.227 07:22:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:31.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.227 --rc genhtml_branch_coverage=1 00:13:31.227 --rc genhtml_function_coverage=1 00:13:31.227 --rc genhtml_legend=1 00:13:31.227 --rc geninfo_all_blocks=1 00:13:31.227 --rc geninfo_unexecuted_blocks=1 00:13:31.227 00:13:31.227 ' 00:13:31.227 
07:22:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:31.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.227 --rc genhtml_branch_coverage=1 00:13:31.227 --rc genhtml_function_coverage=1 00:13:31.227 --rc genhtml_legend=1 00:13:31.227 --rc geninfo_all_blocks=1 00:13:31.227 --rc geninfo_unexecuted_blocks=1 00:13:31.227 00:13:31.227 ' 00:13:31.227 07:22:53 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:31.227 07:22:53 -- nvmf/common.sh@7 -- # uname -s 00:13:31.227 07:22:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.227 07:22:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.227 07:22:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.227 07:22:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.227 07:22:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.227 07:22:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.227 07:22:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.227 07:22:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.227 07:22:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.227 07:22:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.227 07:22:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:13:31.227 07:22:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:13:31.227 07:22:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.227 07:22:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.227 07:22:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:31.227 07:22:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:31.227 07:22:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.227 07:22:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.227 07:22:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.227 07:22:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.227 07:22:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.227 07:22:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.227 07:22:53 -- paths/export.sh@5 -- # export PATH 00:13:31.227 07:22:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.227 07:22:53 -- nvmf/common.sh@46 -- # : 0 00:13:31.227 07:22:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:31.227 07:22:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:31.227 07:22:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:31.227 07:22:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.227 07:22:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.227 07:22:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:31.227 07:22:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:31.227 07:22:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:31.227 07:22:53 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:31.227 07:22:53 -- fips/fips.sh@89 -- # check_openssl_version 00:13:31.227 07:22:53 -- fips/fips.sh@83 -- # local target=3.0.0 00:13:31.227 07:22:53 -- fips/fips.sh@85 -- # openssl version 00:13:31.227 07:22:53 -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:31.487 07:22:53 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:13:31.487 07:22:53 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:31.487 07:22:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:31.487 07:22:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:31.487 07:22:53 -- scripts/common.sh@335 -- # IFS=.-: 00:13:31.487 07:22:53 -- scripts/common.sh@335 -- # read -ra ver1 00:13:31.487 07:22:53 -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.487 07:22:53 -- scripts/common.sh@336 -- # read -ra ver2 00:13:31.487 07:22:53 -- scripts/common.sh@337 -- # local 'op=>=' 00:13:31.487 07:22:53 -- scripts/common.sh@339 -- # ver1_l=3 00:13:31.487 07:22:53 -- scripts/common.sh@340 -- # ver2_l=3 00:13:31.487 07:22:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:31.487 07:22:53 -- scripts/common.sh@343 -- # case "$op" in 00:13:31.487 07:22:53 -- scripts/common.sh@347 -- # : 1 00:13:31.487 07:22:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:31.487 07:22:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:31.487 07:22:53 -- scripts/common.sh@364 -- # decimal 3 00:13:31.487 07:22:53 -- scripts/common.sh@352 -- # local d=3 00:13:31.487 07:22:53 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:31.487 07:22:53 -- scripts/common.sh@354 -- # echo 3 00:13:31.487 07:22:53 -- scripts/common.sh@364 -- # ver1[v]=3 00:13:31.487 07:22:53 -- scripts/common.sh@365 -- # decimal 3 00:13:31.487 07:22:53 -- scripts/common.sh@352 -- # local d=3 00:13:31.487 07:22:53 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:31.487 07:22:53 -- scripts/common.sh@354 -- # echo 3 00:13:31.487 07:22:53 -- scripts/common.sh@365 -- # ver2[v]=3 00:13:31.487 07:22:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:31.487 07:22:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:31.487 07:22:53 -- scripts/common.sh@363 -- # (( v++ )) 00:13:31.487 07:22:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:31.487 07:22:53 -- scripts/common.sh@364 -- # decimal 1 00:13:31.487 07:22:53 -- scripts/common.sh@352 -- # local d=1 00:13:31.487 07:22:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.487 07:22:53 -- scripts/common.sh@354 -- # echo 1 00:13:31.487 07:22:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:31.487 07:22:53 -- scripts/common.sh@365 -- # decimal 0 00:13:31.487 07:22:53 -- scripts/common.sh@352 -- # local d=0 00:13:31.487 07:22:53 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:31.487 07:22:53 -- scripts/common.sh@354 -- # echo 0 00:13:31.487 07:22:53 -- scripts/common.sh@365 -- # ver2[v]=0 00:13:31.487 07:22:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:31.487 07:22:53 -- scripts/common.sh@366 -- # return 0 00:13:31.487 07:22:53 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:31.487 07:22:53 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:31.487 07:22:53 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:31.487 07:22:53 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:31.487 07:22:53 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:31.487 07:22:53 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:31.487 07:22:53 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:31.487 07:22:53 -- fips/fips.sh@113 -- # build_openssl_config 00:13:31.487 07:22:53 -- fips/fips.sh@37 -- # cat 00:13:31.487 07:22:53 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:31.487 07:22:53 -- fips/fips.sh@58 -- # cat - 00:13:31.487 07:22:53 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:31.487 07:22:53 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:31.487 07:22:53 -- fips/fips.sh@116 -- # mapfile -t providers 00:13:31.487 07:22:53 -- fips/fips.sh@116 -- # openssl list -providers 00:13:31.487 07:22:53 -- fips/fips.sh@116 -- # grep name 00:13:31.487 07:22:53 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:31.487 07:22:53 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:31.487 07:22:53 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:31.487 07:22:53 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:31.487 07:22:53 -- fips/fips.sh@127 -- # : 00:13:31.487 07:22:53 -- common/autotest_common.sh@650 -- # local es=0 00:13:31.487 07:22:53 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:31.487 07:22:53 -- common/autotest_common.sh@638 -- # local arg=openssl 00:13:31.487 07:22:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.487 07:22:53 -- common/autotest_common.sh@642 -- # type -t openssl 00:13:31.487 07:22:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.487 07:22:53 -- common/autotest_common.sh@644 -- # type -P openssl 00:13:31.487 07:22:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.487 07:22:53 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:13:31.487 07:22:53 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:13:31.487 07:22:53 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:13:31.487 Error setting digest 00:13:31.487 40D2B8B5C37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:31.487 40D2B8B5C37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:31.487 07:22:53 -- common/autotest_common.sh@653 -- # es=1 00:13:31.487 07:22:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:31.487 07:22:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:31.487 07:22:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:31.487 07:22:53 -- fips/fips.sh@130 -- # nvmftestinit 00:13:31.487 07:22:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:31.487 07:22:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:31.487 07:22:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:31.487 07:22:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:31.487 07:22:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:31.487 07:22:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.487 07:22:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:31.487 07:22:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:31.487 07:22:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:31.487 07:22:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:31.487 07:22:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:31.487 07:22:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:31.487 07:22:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:31.487 07:22:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:31.488 07:22:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.488 07:22:53 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.488 07:22:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:31.488 07:22:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:31.488 07:22:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:31.488 07:22:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:31.488 07:22:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:31.488 07:22:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.488 07:22:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:31.488 07:22:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:31.488 07:22:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:31.488 07:22:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:31.488 07:22:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:31.488 07:22:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:31.488 Cannot find device "nvmf_tgt_br" 00:13:31.488 07:22:53 -- nvmf/common.sh@154 -- # true 00:13:31.488 07:22:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:31.488 Cannot find device "nvmf_tgt_br2" 00:13:31.488 07:22:53 -- nvmf/common.sh@155 -- # true 00:13:31.488 07:22:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:31.488 07:22:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:31.488 Cannot find device "nvmf_tgt_br" 00:13:31.488 07:22:53 -- nvmf/common.sh@157 -- # true 00:13:31.488 07:22:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:31.488 Cannot find device "nvmf_tgt_br2" 00:13:31.488 07:22:53 -- nvmf/common.sh@158 -- # true 00:13:31.488 07:22:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:31.771 07:22:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:31.771 07:22:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:31.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.771 07:22:53 -- nvmf/common.sh@161 -- # true 00:13:31.771 07:22:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:31.771 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:31.771 07:22:53 -- nvmf/common.sh@162 -- # true 00:13:31.771 07:22:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:31.771 07:22:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:31.771 07:22:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:31.771 07:22:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:31.771 07:22:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:31.771 07:22:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:31.771 07:22:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:31.771 07:22:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:31.771 07:22:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:31.771 07:22:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:31.771 07:22:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:31.772 07:22:53 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:31.772 07:22:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:31.772 07:22:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:31.772 07:22:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:31.772 07:22:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:31.772 07:22:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:31.772 07:22:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:31.772 07:22:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:31.772 07:22:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:31.772 07:22:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:31.772 07:22:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:31.772 07:22:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:31.772 07:22:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:31.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:13:31.772 00:13:31.772 --- 10.0.0.2 ping statistics --- 00:13:31.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.772 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:31.772 07:22:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:31.772 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:31.772 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:13:31.772 00:13:31.772 --- 10.0.0.3 ping statistics --- 00:13:31.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.772 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:31.772 07:22:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:31.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:31.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:31.772 00:13:31.772 --- 10.0.0.1 ping statistics --- 00:13:31.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.772 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:31.772 07:22:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.772 07:22:53 -- nvmf/common.sh@421 -- # return 0 00:13:31.772 07:22:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:31.772 07:22:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.772 07:22:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:31.772 07:22:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:31.772 07:22:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.772 07:22:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:31.772 07:22:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:31.772 07:22:54 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:31.772 07:22:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:31.772 07:22:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.772 07:22:54 -- common/autotest_common.sh@10 -- # set +x 00:13:31.772 07:22:54 -- nvmf/common.sh@469 -- # nvmfpid=78286 00:13:31.772 07:22:54 -- nvmf/common.sh@470 -- # waitforlisten 78286 00:13:31.772 07:22:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:31.772 07:22:54 -- common/autotest_common.sh@829 -- # '[' -z 78286 ']' 00:13:31.772 07:22:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.772 07:22:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.772 07:22:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.772 07:22:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.772 07:22:54 -- common/autotest_common.sh@10 -- # set +x 00:13:32.031 [2024-11-28 07:22:54.085550] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:32.031 [2024-11-28 07:22:54.085660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.031 [2024-11-28 07:22:54.225912] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.291 [2024-11-28 07:22:54.318359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:32.291 [2024-11-28 07:22:54.318502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.291 [2024-11-28 07:22:54.318515] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.291 [2024-11-28 07:22:54.318524] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
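The nvmf_veth_init steps traced above amount to a small virtual topology: one network namespace (nvmf_tgt_ns_spdk), three veth pairs, a bridge (nvmf_br), and the 10.0.0.0/24 addresses that the later connections use. A condensed, standalone sketch of the same plumbing, using only commands visible in the trace (run as root; an illustration, not the test harness itself):

    #!/usr/bin/env bash
    # Condensed sketch of the topology nvmf_veth_init builds in the trace above.
    # Names and addresses mirror the log; illustration only.
    set -euo pipefail
    NS=nvmf_tgt_ns_spdk

    ip netns add "$NS"

    # One veth pair for the initiator side, two for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target ends move into the namespace; addresses match the log.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic in and verify reachability, as the trace does.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec "$NS" ping -c 1 10.0.0.1

The earlier "Cannot find device" and "Cannot open network namespace" lines come from the teardown of any leftover topology and are harmless when nothing from a previous run exists.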
00:13:32.291 [2024-11-28 07:22:54.318551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.859 07:22:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.859 07:22:55 -- common/autotest_common.sh@862 -- # return 0 00:13:32.859 07:22:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:32.859 07:22:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.859 07:22:55 -- common/autotest_common.sh@10 -- # set +x 00:13:32.859 07:22:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.859 07:22:55 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:32.859 07:22:55 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:32.859 07:22:55 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:32.859 07:22:55 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:32.859 07:22:55 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:32.859 07:22:55 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:32.859 07:22:55 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:32.859 07:22:55 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:33.117 [2024-11-28 07:22:55.349661] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.117 [2024-11-28 07:22:55.365550] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:33.117 [2024-11-28 07:22:55.365806] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.375 malloc0 00:13:33.375 07:22:55 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:33.375 07:22:55 -- fips/fips.sh@147 -- # bdevperf_pid=78320 00:13:33.375 07:22:55 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:33.375 07:22:55 -- fips/fips.sh@148 -- # waitforlisten 78320 /var/tmp/bdevperf.sock 00:13:33.375 07:22:55 -- common/autotest_common.sh@829 -- # '[' -z 78320 ']' 00:13:33.375 07:22:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:33.375 07:22:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:33.375 07:22:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:33.375 07:22:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.375 07:22:55 -- common/autotest_common.sh@10 -- # set +x 00:13:33.375 [2024-11-28 07:22:55.489216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
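Around this point the FIPS test writes the TLS pre-shared key and starts bdevperf; the TLS attach and the timed I/O run follow in the trace below. Condensed to the commands actually visible in the log (paths shortened to $SPDK for readability; the subsystem/listener RPCs issued by setup_nvmf_tgt_conf are not shown in the log and are omitted here):

    # Condensed sketch of the TLS leg of this test, from the traced commands.
    SPDK=/home/vagrant/spdk_repo/spdk
    KEY=$SPDK/test/nvmf/fips/key.txt

    # 1. Write the PSK in NVMe TLS interchange format and restrict permissions.
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
    chmod 0600 "$KEY"

    # 2. Start bdevperf with its own RPC socket (-z waits for configuration).
    #    The harness uses waitforlisten; a plain socket check stands in here.
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.2; done

    # 3. Attach to the TLS-enabled listener using the PSK, then drive the run.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests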
00:13:33.375 [2024-11-28 07:22:55.489289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78320 ] 00:13:33.375 [2024-11-28 07:22:55.628063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.634 [2024-11-28 07:22:55.720845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.202 07:22:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.202 07:22:56 -- common/autotest_common.sh@862 -- # return 0 00:13:34.202 07:22:56 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:34.461 [2024-11-28 07:22:56.651527] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:34.461 TLSTESTn1 00:13:34.720 07:22:56 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:34.720 Running I/O for 10 seconds... 00:13:44.698 00:13:44.698 Latency(us) 00:13:44.698 [2024-11-28T07:23:06.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.698 [2024-11-28T07:23:06.973Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:44.698 Verification LBA range: start 0x0 length 0x2000 00:13:44.698 TLSTESTn1 : 10.02 5647.53 22.06 0.00 0.00 22627.13 5570.56 24188.74 00:13:44.698 [2024-11-28T07:23:06.973Z] =================================================================================================================== 00:13:44.698 [2024-11-28T07:23:06.973Z] Total : 5647.53 22.06 0.00 0.00 22627.13 5570.56 24188.74 00:13:44.698 0 00:13:44.698 07:23:06 -- fips/fips.sh@1 -- # cleanup 00:13:44.698 07:23:06 -- fips/fips.sh@15 -- # process_shm --id 0 00:13:44.698 07:23:06 -- common/autotest_common.sh@806 -- # type=--id 00:13:44.698 07:23:06 -- common/autotest_common.sh@807 -- # id=0 00:13:44.698 07:23:06 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:44.698 07:23:06 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:44.698 07:23:06 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:44.698 07:23:06 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:44.698 07:23:06 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:44.698 07:23:06 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:44.698 nvmf_trace.0 00:13:44.698 07:23:06 -- common/autotest_common.sh@821 -- # return 0 00:13:44.698 07:23:06 -- fips/fips.sh@16 -- # killprocess 78320 00:13:44.698 07:23:06 -- common/autotest_common.sh@936 -- # '[' -z 78320 ']' 00:13:44.698 07:23:06 -- common/autotest_common.sh@940 -- # kill -0 78320 00:13:44.698 07:23:06 -- common/autotest_common.sh@941 -- # uname 00:13:44.698 07:23:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:44.698 07:23:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78320 00:13:44.957 07:23:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:44.957 killing process with pid 78320 00:13:44.957 07:23:06 -- 
common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:44.957 07:23:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78320' 00:13:44.957 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.957 00:13:44.957 Latency(us) 00:13:44.957 [2024-11-28T07:23:07.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.957 [2024-11-28T07:23:07.232Z] =================================================================================================================== 00:13:44.957 [2024-11-28T07:23:07.232Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:44.957 07:23:06 -- common/autotest_common.sh@955 -- # kill 78320 00:13:44.957 07:23:06 -- common/autotest_common.sh@960 -- # wait 78320 00:13:44.957 07:23:07 -- fips/fips.sh@17 -- # nvmftestfini 00:13:44.957 07:23:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:44.957 07:23:07 -- nvmf/common.sh@116 -- # sync 00:13:45.215 07:23:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:45.215 07:23:07 -- nvmf/common.sh@119 -- # set +e 00:13:45.215 07:23:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:45.215 07:23:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:45.215 rmmod nvme_tcp 00:13:45.215 rmmod nvme_fabrics 00:13:45.215 rmmod nvme_keyring 00:13:45.215 07:23:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:45.215 07:23:07 -- nvmf/common.sh@123 -- # set -e 00:13:45.215 07:23:07 -- nvmf/common.sh@124 -- # return 0 00:13:45.215 07:23:07 -- nvmf/common.sh@477 -- # '[' -n 78286 ']' 00:13:45.215 07:23:07 -- nvmf/common.sh@478 -- # killprocess 78286 00:13:45.215 07:23:07 -- common/autotest_common.sh@936 -- # '[' -z 78286 ']' 00:13:45.216 07:23:07 -- common/autotest_common.sh@940 -- # kill -0 78286 00:13:45.216 07:23:07 -- common/autotest_common.sh@941 -- # uname 00:13:45.216 07:23:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:45.216 07:23:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78286 00:13:45.216 07:23:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:45.216 07:23:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:45.216 killing process with pid 78286 00:13:45.216 07:23:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78286' 00:13:45.216 07:23:07 -- common/autotest_common.sh@955 -- # kill 78286 00:13:45.216 07:23:07 -- common/autotest_common.sh@960 -- # wait 78286 00:13:45.474 07:23:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:45.474 07:23:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:45.474 07:23:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:45.474 07:23:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:45.474 07:23:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:45.474 07:23:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.474 07:23:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.475 07:23:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.475 07:23:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:45.475 07:23:07 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:45.475 00:13:45.475 real 0m14.371s 00:13:45.475 user 0m18.699s 00:13:45.475 sys 0m6.291s 00:13:45.475 07:23:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:45.475 07:23:07 -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 ************************************ 
00:13:45.475 END TEST nvmf_fips 00:13:45.475 ************************************ 00:13:45.475 07:23:07 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:13:45.475 07:23:07 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:45.475 07:23:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:45.475 07:23:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.475 07:23:07 -- common/autotest_common.sh@10 -- # set +x 00:13:45.475 ************************************ 00:13:45.475 START TEST nvmf_fuzz 00:13:45.475 ************************************ 00:13:45.475 07:23:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:45.734 * Looking for test storage... 00:13:45.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.734 07:23:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:45.734 07:23:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:45.734 07:23:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:45.734 07:23:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:45.734 07:23:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:45.734 07:23:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:45.734 07:23:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:45.734 07:23:07 -- scripts/common.sh@335 -- # IFS=.-: 00:13:45.734 07:23:07 -- scripts/common.sh@335 -- # read -ra ver1 00:13:45.734 07:23:07 -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.734 07:23:07 -- scripts/common.sh@336 -- # read -ra ver2 00:13:45.734 07:23:07 -- scripts/common.sh@337 -- # local 'op=<' 00:13:45.734 07:23:07 -- scripts/common.sh@339 -- # ver1_l=2 00:13:45.734 07:23:07 -- scripts/common.sh@340 -- # ver2_l=1 00:13:45.734 07:23:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:45.734 07:23:07 -- scripts/common.sh@343 -- # case "$op" in 00:13:45.734 07:23:07 -- scripts/common.sh@344 -- # : 1 00:13:45.734 07:23:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:45.734 07:23:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.735 07:23:07 -- scripts/common.sh@364 -- # decimal 1 00:13:45.735 07:23:07 -- scripts/common.sh@352 -- # local d=1 00:13:45.735 07:23:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.735 07:23:07 -- scripts/common.sh@354 -- # echo 1 00:13:45.735 07:23:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:45.735 07:23:07 -- scripts/common.sh@365 -- # decimal 2 00:13:45.735 07:23:07 -- scripts/common.sh@352 -- # local d=2 00:13:45.735 07:23:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.735 07:23:07 -- scripts/common.sh@354 -- # echo 2 00:13:45.735 07:23:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:45.735 07:23:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:45.735 07:23:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:45.735 07:23:07 -- scripts/common.sh@367 -- # return 0 00:13:45.735 07:23:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.735 07:23:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.735 --rc genhtml_branch_coverage=1 00:13:45.735 --rc genhtml_function_coverage=1 00:13:45.735 --rc genhtml_legend=1 00:13:45.735 --rc geninfo_all_blocks=1 00:13:45.735 --rc geninfo_unexecuted_blocks=1 00:13:45.735 00:13:45.735 ' 00:13:45.735 07:23:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.735 --rc genhtml_branch_coverage=1 00:13:45.735 --rc genhtml_function_coverage=1 00:13:45.735 --rc genhtml_legend=1 00:13:45.735 --rc geninfo_all_blocks=1 00:13:45.735 --rc geninfo_unexecuted_blocks=1 00:13:45.735 00:13:45.735 ' 00:13:45.735 07:23:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.735 --rc genhtml_branch_coverage=1 00:13:45.735 --rc genhtml_function_coverage=1 00:13:45.735 --rc genhtml_legend=1 00:13:45.735 --rc geninfo_all_blocks=1 00:13:45.735 --rc geninfo_unexecuted_blocks=1 00:13:45.735 00:13:45.735 ' 00:13:45.735 07:23:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.735 --rc genhtml_branch_coverage=1 00:13:45.735 --rc genhtml_function_coverage=1 00:13:45.735 --rc genhtml_legend=1 00:13:45.735 --rc geninfo_all_blocks=1 00:13:45.735 --rc geninfo_unexecuted_blocks=1 00:13:45.735 00:13:45.735 ' 00:13:45.735 07:23:07 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.735 07:23:07 -- nvmf/common.sh@7 -- # uname -s 00:13:45.735 07:23:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.735 07:23:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.735 07:23:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.735 07:23:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.735 07:23:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.735 07:23:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.735 07:23:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.735 07:23:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.735 07:23:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.735 07:23:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.735 07:23:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 
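The lt/ge helpers from scripts/common.sh that fill much of the trace (here for the lcov check, earlier for the OpenSSL >= 3.0.0 check) reduce to a field-by-field numeric comparison. A minimal sketch of that idea, assuming plain dotted version strings (the real helper additionally validates each field via its decimal() check):

    # Minimal sketch of the cmp_versions idea traced above: split on '.', '-'
    # or ':' and compare field by field, treating missing fields as 0.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v max
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            if (( a > b )); then
                [[ $op == '>=' || $op == '>' ]]; return
            elif (( a < b )); then
                [[ $op == '<=' || $op == '<' ]]; return
            fi
        done
        # All fields equal.
        [[ $op == '>=' || $op == '<=' || $op == '==' ]]
    }

    cmp_versions 3.1.1 '>=' 3.0.0 && echo "FIPS-capable OpenSSL (>= 3.0.0)"
    cmp_versions 1.15  '<'  2     && echo "old lcov option set needed"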
00:13:45.735 07:23:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:13:45.735 07:23:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.735 07:23:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.735 07:23:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.735 07:23:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.735 07:23:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.735 07:23:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.735 07:23:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.735 07:23:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.735 07:23:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.735 07:23:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.735 07:23:07 -- paths/export.sh@5 -- # export PATH 00:13:45.735 07:23:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.735 07:23:07 -- nvmf/common.sh@46 -- # : 0 00:13:45.735 07:23:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:45.735 07:23:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:45.735 07:23:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:45.735 07:23:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.735 07:23:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.735 07:23:07 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:45.735 07:23:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:45.735 07:23:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:45.735 07:23:07 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:13:45.735 07:23:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:45.735 07:23:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.735 07:23:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:45.735 07:23:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:45.735 07:23:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:45.735 07:23:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.735 07:23:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:45.735 07:23:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.735 07:23:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:45.735 07:23:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:45.735 07:23:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:45.735 07:23:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:45.735 07:23:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:45.735 07:23:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:45.735 07:23:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.735 07:23:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.735 07:23:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:45.735 07:23:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:45.735 07:23:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:45.735 07:23:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:45.735 07:23:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:45.735 07:23:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.735 07:23:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:45.735 07:23:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:45.735 07:23:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:45.735 07:23:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:45.735 07:23:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:45.735 07:23:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:45.735 Cannot find device "nvmf_tgt_br" 00:13:45.735 07:23:07 -- nvmf/common.sh@154 -- # true 00:13:45.735 07:23:07 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:45.735 Cannot find device "nvmf_tgt_br2" 00:13:45.735 07:23:07 -- nvmf/common.sh@155 -- # true 00:13:45.735 07:23:07 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:45.735 07:23:07 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:45.735 Cannot find device "nvmf_tgt_br" 00:13:45.735 07:23:07 -- nvmf/common.sh@157 -- # true 00:13:45.735 07:23:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:45.735 Cannot find device "nvmf_tgt_br2" 00:13:45.735 07:23:08 -- nvmf/common.sh@158 -- # true 00:13:45.735 07:23:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:45.994 07:23:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:45.995 07:23:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:45.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.995 07:23:08 -- nvmf/common.sh@161 -- # true 00:13:45.995 07:23:08 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:45.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:45.995 07:23:08 -- nvmf/common.sh@162 -- # true 00:13:45.995 07:23:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:45.995 07:23:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:45.995 07:23:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:45.995 07:23:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:45.995 07:23:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:45.995 07:23:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:45.995 07:23:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:45.995 07:23:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:45.995 07:23:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:45.995 07:23:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:45.995 07:23:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:45.995 07:23:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:45.995 07:23:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:45.995 07:23:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:45.995 07:23:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:45.995 07:23:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:45.995 07:23:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:45.995 07:23:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:45.995 07:23:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:45.995 07:23:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:45.995 07:23:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:45.995 07:23:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:45.995 07:23:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:45.995 07:23:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:46.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:46.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:13:46.254 00:13:46.254 --- 10.0.0.2 ping statistics --- 00:13:46.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.254 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:46.254 07:23:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:46.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:46.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:13:46.254 00:13:46.254 --- 10.0.0.3 ping statistics --- 00:13:46.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.254 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:13:46.254 07:23:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:46.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:46.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:13:46.254 00:13:46.254 --- 10.0.0.1 ping statistics --- 00:13:46.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.254 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:13:46.254 07:23:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.254 07:23:08 -- nvmf/common.sh@421 -- # return 0 00:13:46.254 07:23:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:46.254 07:23:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.254 07:23:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:46.254 07:23:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:46.254 07:23:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.254 07:23:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:46.254 07:23:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:46.254 07:23:08 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78657 00:13:46.254 07:23:08 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:46.254 07:23:08 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:46.254 07:23:08 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78657 00:13:46.254 07:23:08 -- common/autotest_common.sh@829 -- # '[' -z 78657 ']' 00:13:46.254 07:23:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.254 07:23:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:46.254 07:23:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
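With the target running inside the namespace, the fuzz test below needs only a handful of RPCs before launching nvme_fuzz. Condensed from the commands visible in the trace that follows (rpc_cmd in the log is a thin wrapper around scripts/rpc.py talking to the target's default RPC socket; shown here as direct calls, paths shortened to $SPDK):

    # Condensed sketch of the fabrics_fuzz flow traced below.
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC=$SPDK/scripts/rpc.py
    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

    # Target-side setup: TCP transport, a 64 MB malloc namespace, one subsystem.
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create -b Malloc0 64 512
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Random fuzzing for 30 s with a fixed seed, then a replay of the example JSON.
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz \
        -t 30 -S 123456 -F "$TRID" -N -a
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz \
        -F "$TRID" -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a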
00:13:46.254 07:23:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:46.254 07:23:08 -- common/autotest_common.sh@10 -- # set +x 00:13:47.191 07:23:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:47.191 07:23:09 -- common/autotest_common.sh@862 -- # return 0 00:13:47.191 07:23:09 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.191 07:23:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.191 07:23:09 -- common/autotest_common.sh@10 -- # set +x 00:13:47.191 07:23:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.191 07:23:09 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:13:47.191 07:23:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.191 07:23:09 -- common/autotest_common.sh@10 -- # set +x 00:13:47.450 Malloc0 00:13:47.451 07:23:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.451 07:23:09 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.451 07:23:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.451 07:23:09 -- common/autotest_common.sh@10 -- # set +x 00:13:47.451 07:23:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.451 07:23:09 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:47.451 07:23:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.451 07:23:09 -- common/autotest_common.sh@10 -- # set +x 00:13:47.451 07:23:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.451 07:23:09 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.451 07:23:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.451 07:23:09 -- common/autotest_common.sh@10 -- # set +x 00:13:47.451 07:23:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.451 07:23:09 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:13:47.451 07:23:09 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:13:47.710 Shutting down the fuzz application 00:13:47.710 07:23:09 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:13:47.969 Shutting down the fuzz application 00:13:47.969 07:23:10 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.969 07:23:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.969 07:23:10 -- common/autotest_common.sh@10 -- # set +x 00:13:47.969 07:23:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.969 07:23:10 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:47.969 07:23:10 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:13:47.969 07:23:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:47.969 07:23:10 -- nvmf/common.sh@116 -- # sync 00:13:48.229 07:23:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:48.229 07:23:10 -- nvmf/common.sh@119 -- # set +e 00:13:48.229 07:23:10 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:13:48.229 07:23:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:48.229 rmmod nvme_tcp 00:13:48.229 rmmod nvme_fabrics 00:13:48.229 rmmod nvme_keyring 00:13:48.229 07:23:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:48.229 07:23:10 -- nvmf/common.sh@123 -- # set -e 00:13:48.229 07:23:10 -- nvmf/common.sh@124 -- # return 0 00:13:48.229 07:23:10 -- nvmf/common.sh@477 -- # '[' -n 78657 ']' 00:13:48.229 07:23:10 -- nvmf/common.sh@478 -- # killprocess 78657 00:13:48.229 07:23:10 -- common/autotest_common.sh@936 -- # '[' -z 78657 ']' 00:13:48.229 07:23:10 -- common/autotest_common.sh@940 -- # kill -0 78657 00:13:48.229 07:23:10 -- common/autotest_common.sh@941 -- # uname 00:13:48.229 07:23:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:48.229 07:23:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78657 00:13:48.229 07:23:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:48.229 killing process with pid 78657 00:13:48.229 07:23:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:48.229 07:23:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78657' 00:13:48.229 07:23:10 -- common/autotest_common.sh@955 -- # kill 78657 00:13:48.229 07:23:10 -- common/autotest_common.sh@960 -- # wait 78657 00:13:48.489 07:23:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:48.489 07:23:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:48.489 07:23:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:48.489 07:23:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.489 07:23:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:48.489 07:23:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.489 07:23:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.489 07:23:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.489 07:23:10 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:48.489 07:23:10 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:13:48.489 00:13:48.489 real 0m3.038s 00:13:48.489 user 0m3.197s 00:13:48.489 sys 0m0.723s 00:13:48.489 07:23:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:48.489 ************************************ 00:13:48.489 END TEST nvmf_fuzz 00:13:48.489 07:23:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.489 ************************************ 00:13:48.749 07:23:10 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:48.749 07:23:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.749 07:23:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.749 07:23:10 -- common/autotest_common.sh@10 -- # set +x 00:13:48.749 ************************************ 00:13:48.749 START TEST nvmf_multiconnection 00:13:48.749 ************************************ 00:13:48.749 07:23:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:48.749 * Looking for test storage... 
00:13:48.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.749 07:23:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:48.749 07:23:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:48.749 07:23:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:48.749 07:23:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:48.749 07:23:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:48.749 07:23:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:48.749 07:23:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:48.749 07:23:10 -- scripts/common.sh@335 -- # IFS=.-: 00:13:48.749 07:23:10 -- scripts/common.sh@335 -- # read -ra ver1 00:13:48.749 07:23:10 -- scripts/common.sh@336 -- # IFS=.-: 00:13:48.749 07:23:10 -- scripts/common.sh@336 -- # read -ra ver2 00:13:48.749 07:23:10 -- scripts/common.sh@337 -- # local 'op=<' 00:13:48.749 07:23:10 -- scripts/common.sh@339 -- # ver1_l=2 00:13:48.749 07:23:10 -- scripts/common.sh@340 -- # ver2_l=1 00:13:48.749 07:23:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:48.749 07:23:10 -- scripts/common.sh@343 -- # case "$op" in 00:13:48.749 07:23:10 -- scripts/common.sh@344 -- # : 1 00:13:48.749 07:23:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:48.749 07:23:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:48.749 07:23:10 -- scripts/common.sh@364 -- # decimal 1 00:13:48.749 07:23:10 -- scripts/common.sh@352 -- # local d=1 00:13:48.749 07:23:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:48.749 07:23:10 -- scripts/common.sh@354 -- # echo 1 00:13:48.749 07:23:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:48.749 07:23:10 -- scripts/common.sh@365 -- # decimal 2 00:13:48.749 07:23:10 -- scripts/common.sh@352 -- # local d=2 00:13:48.749 07:23:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:48.749 07:23:10 -- scripts/common.sh@354 -- # echo 2 00:13:48.749 07:23:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:48.749 07:23:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:48.749 07:23:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:48.749 07:23:10 -- scripts/common.sh@367 -- # return 0 00:13:48.749 07:23:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:48.749 07:23:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:48.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.749 --rc genhtml_branch_coverage=1 00:13:48.749 --rc genhtml_function_coverage=1 00:13:48.749 --rc genhtml_legend=1 00:13:48.749 --rc geninfo_all_blocks=1 00:13:48.749 --rc geninfo_unexecuted_blocks=1 00:13:48.749 00:13:48.749 ' 00:13:48.749 07:23:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:48.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.749 --rc genhtml_branch_coverage=1 00:13:48.749 --rc genhtml_function_coverage=1 00:13:48.749 --rc genhtml_legend=1 00:13:48.749 --rc geninfo_all_blocks=1 00:13:48.749 --rc geninfo_unexecuted_blocks=1 00:13:48.749 00:13:48.749 ' 00:13:48.749 07:23:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:48.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.749 --rc genhtml_branch_coverage=1 00:13:48.749 --rc genhtml_function_coverage=1 00:13:48.749 --rc genhtml_legend=1 00:13:48.749 --rc geninfo_all_blocks=1 00:13:48.749 --rc geninfo_unexecuted_blocks=1 00:13:48.749 00:13:48.749 ' 00:13:48.749 
07:23:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:48.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:48.749 --rc genhtml_branch_coverage=1 00:13:48.749 --rc genhtml_function_coverage=1 00:13:48.749 --rc genhtml_legend=1 00:13:48.749 --rc geninfo_all_blocks=1 00:13:48.749 --rc geninfo_unexecuted_blocks=1 00:13:48.749 00:13:48.749 ' 00:13:48.749 07:23:10 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.749 07:23:10 -- nvmf/common.sh@7 -- # uname -s 00:13:48.749 07:23:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.749 07:23:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.749 07:23:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.749 07:23:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.749 07:23:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.749 07:23:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.749 07:23:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.749 07:23:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.749 07:23:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.749 07:23:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.749 07:23:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:13:48.749 07:23:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:13:48.749 07:23:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.749 07:23:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.749 07:23:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.749 07:23:10 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.749 07:23:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.749 07:23:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.749 07:23:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.749 07:23:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.749 07:23:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.749 07:23:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.749 07:23:10 -- paths/export.sh@5 -- # export PATH 00:13:48.749 07:23:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.749 07:23:10 -- nvmf/common.sh@46 -- # : 0 00:13:48.749 07:23:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:48.749 07:23:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:48.749 07:23:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:48.749 07:23:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.749 07:23:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.749 07:23:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:48.749 07:23:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:48.749 07:23:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:48.749 07:23:10 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.749 07:23:10 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.749 07:23:10 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:13:48.749 07:23:10 -- target/multiconnection.sh@16 -- # nvmftestinit 00:13:48.749 07:23:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:48.749 07:23:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.749 07:23:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:48.749 07:23:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:48.749 07:23:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:48.749 07:23:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.749 07:23:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.749 07:23:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.749 07:23:10 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:48.749 07:23:10 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:48.749 07:23:10 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:48.750 07:23:10 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:48.750 07:23:10 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:48.750 07:23:10 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:48.750 07:23:10 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.750 07:23:10 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.750 07:23:10 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.750 07:23:10 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:48.750 07:23:10 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.750 07:23:10 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.750 07:23:10 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.750 07:23:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.750 07:23:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.750 07:23:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.750 07:23:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.750 07:23:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.750 07:23:10 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:48.750 07:23:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:49.009 Cannot find device "nvmf_tgt_br" 00:13:49.009 07:23:11 -- nvmf/common.sh@154 -- # true 00:13:49.009 07:23:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.009 Cannot find device "nvmf_tgt_br2" 00:13:49.009 07:23:11 -- nvmf/common.sh@155 -- # true 00:13:49.009 07:23:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:49.009 07:23:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:49.009 Cannot find device "nvmf_tgt_br" 00:13:49.009 07:23:11 -- nvmf/common.sh@157 -- # true 00:13:49.009 07:23:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:49.009 Cannot find device "nvmf_tgt_br2" 00:13:49.009 07:23:11 -- nvmf/common.sh@158 -- # true 00:13:49.009 07:23:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:49.009 07:23:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:49.009 07:23:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.009 07:23:11 -- nvmf/common.sh@161 -- # true 00:13:49.009 07:23:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.009 07:23:11 -- nvmf/common.sh@162 -- # true 00:13:49.009 07:23:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.009 07:23:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.009 07:23:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.009 07:23:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.009 07:23:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.009 07:23:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.009 07:23:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.009 07:23:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:49.009 07:23:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:49.009 07:23:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:49.009 07:23:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:49.009 07:23:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:49.009 07:23:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:49.009 07:23:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.009 07:23:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:13:49.009 07:23:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.269 07:23:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:49.269 07:23:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:49.269 07:23:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.269 07:23:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.269 07:23:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.269 07:23:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.269 07:23:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.269 07:23:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:49.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:13:49.269 00:13:49.269 --- 10.0.0.2 ping statistics --- 00:13:49.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.269 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:49.269 07:23:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:49.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:13:49.269 00:13:49.269 --- 10.0.0.3 ping statistics --- 00:13:49.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.269 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:49.269 07:23:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:49.269 00:13:49.269 --- 10.0.0.1 ping statistics --- 00:13:49.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.269 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:49.269 07:23:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.269 07:23:11 -- nvmf/common.sh@421 -- # return 0 00:13:49.269 07:23:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:49.269 07:23:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.269 07:23:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:49.269 07:23:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:49.269 07:23:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.269 07:23:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:49.269 07:23:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:49.269 07:23:11 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:13:49.269 07:23:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:49.269 07:23:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:49.269 07:23:11 -- common/autotest_common.sh@10 -- # set +x 00:13:49.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
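The nvmf_veth_init steps traced above build a small virtual topology: a network namespace for the target, veth pairs whose peer ends are enslaved to a bridge, addresses in 10.0.0.0/24, an iptables rule admitting NVMe/TCP traffic on port 4420, and ping checks in both directions. A condensed sketch of the same construction, assuming root privileges and iproute2; interface and namespace names mirror the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                             # initiator -> target, as checked above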
00:13:49.269 07:23:11 -- nvmf/common.sh@469 -- # nvmfpid=78859 00:13:49.269 07:23:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:49.269 07:23:11 -- nvmf/common.sh@470 -- # waitforlisten 78859 00:13:49.269 07:23:11 -- common/autotest_common.sh@829 -- # '[' -z 78859 ']' 00:13:49.269 07:23:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.269 07:23:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.269 07:23:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.269 07:23:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.269 07:23:11 -- common/autotest_common.sh@10 -- # set +x 00:13:49.269 [2024-11-28 07:23:11.441073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:49.269 [2024-11-28 07:23:11.441377] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.529 [2024-11-28 07:23:11.583845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.529 [2024-11-28 07:23:11.670812] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:49.529 [2024-11-28 07:23:11.671276] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.529 [2024-11-28 07:23:11.671490] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.529 [2024-11-28 07:23:11.671708] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
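nvmfappstart then launches the SPDK target inside that namespace and blocks until its RPC socket is ready. A rough equivalent, with the command-line flags copied from the invocation logged above and a simple polling loop standing in for the suite's waitforlisten helper (the socket check is a crude approximation, not the suite's actual logic):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # stand-in for waitforlisten: wait for the UNIX-domain RPC socket to appear
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done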
00:13:49.529 [2024-11-28 07:23:11.672080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.529 [2024-11-28 07:23:11.672276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.529 [2024-11-28 07:23:11.672285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.529 [2024-11-28 07:23:11.672136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.467 07:23:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.467 07:23:12 -- common/autotest_common.sh@862 -- # return 0 00:13:50.467 07:23:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:50.467 07:23:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.467 07:23:12 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 [2024-11-28 07:23:12.503114] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@21 -- # seq 1 11 00:13:50.467 07:23:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.467 07:23:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 Malloc1 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 [2024-11-28 07:23:12.599411] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.467 07:23:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 Malloc2 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.467 07:23:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 Malloc3 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.467 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.467 07:23:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.467 07:23:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:13:50.467 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.467 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 Malloc4 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 
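The loop traced here repeats the same RPC calls for cnode1 through cnode11: create a 64 MiB malloc bdev with 512-byte blocks, create a subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. One iteration, sketched with SPDK's scripts/rpc.py in place of the test suite's rpc_cmd wrapper; the transport line repeats the options logged earlier and runs once before the loop:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # once, before creating subsystems
  i=1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420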
07:23:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.727 07:23:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 Malloc5 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.727 07:23:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 Malloc6 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.727 07:23:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 Malloc7 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:13:50.727 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.727 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.727 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.727 07:23:12 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:13:50.728 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.728 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.728 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.728 07:23:12 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:13:50.728 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.728 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.728 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.728 07:23:12 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.728 07:23:12 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:13:50.728 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.728 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.728 Malloc8 00:13:50.728 07:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.728 07:23:12 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:13:50.728 07:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.728 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:50.987 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.987 07:23:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:13:50.987 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.987 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.987 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.987 07:23:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:13:50.987 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.987 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.987 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.987 07:23:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.987 07:23:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:13:50.987 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.987 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.987 Malloc9 00:13:50.987 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.987 07:23:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:13:50.987 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.987 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.987 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.987 07:23:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode9 Malloc9 00:13:50.987 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.987 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.987 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.987 07:23:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:13:50.987 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.987 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.987 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.987 07:23:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.987 07:23:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:13:50.987 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.987 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.987 Malloc10 00:13:50.988 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.988 07:23:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:13:50.988 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.988 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.988 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.988 07:23:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:13:50.988 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.988 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.988 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.988 07:23:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:13:50.988 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.988 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.988 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.988 07:23:13 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.988 07:23:13 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:13:50.988 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.988 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.988 Malloc11 00:13:50.988 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.988 07:23:13 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:13:50.988 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.988 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.988 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.988 07:23:13 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:13:50.988 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.988 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.988 07:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.988 07:23:13 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:13:50.988 07:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.988 07:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:50.988 07:23:13 -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:13:50.988 07:23:13 -- target/multiconnection.sh@28 -- # seq 1 11 00:13:50.988 07:23:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:50.988 07:23:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:51.247 07:23:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:13:51.247 07:23:13 -- common/autotest_common.sh@1187 -- # local i=0 00:13:51.247 07:23:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.247 07:23:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:51.247 07:23:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:53.152 07:23:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:53.152 07:23:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:13:53.152 07:23:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:53.152 07:23:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:53.152 07:23:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.152 07:23:15 -- common/autotest_common.sh@1197 -- # return 0 00:13:53.152 07:23:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:53.152 07:23:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:13:53.411 07:23:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:13:53.411 07:23:15 -- common/autotest_common.sh@1187 -- # local i=0 00:13:53.411 07:23:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.411 07:23:15 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:53.411 07:23:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:55.316 07:23:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:55.316 07:23:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:55.317 07:23:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:13:55.317 07:23:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:55.317 07:23:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.317 07:23:17 -- common/autotest_common.sh@1197 -- # return 0 00:13:55.317 07:23:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:55.317 07:23:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:13:55.575 07:23:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:13:55.575 07:23:17 -- common/autotest_common.sh@1187 -- # local i=0 00:13:55.575 07:23:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:55.575 07:23:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:55.575 07:23:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:57.480 07:23:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:57.480 07:23:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:57.480 07:23:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:13:57.480 07:23:19 -- 
common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:57.480 07:23:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.480 07:23:19 -- common/autotest_common.sh@1197 -- # return 0 00:13:57.480 07:23:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:57.480 07:23:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:13:57.744 07:23:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:13:57.744 07:23:19 -- common/autotest_common.sh@1187 -- # local i=0 00:13:57.744 07:23:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:57.744 07:23:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:57.744 07:23:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:59.648 07:23:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:59.648 07:23:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:59.648 07:23:21 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:13:59.648 07:23:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:59.648 07:23:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:59.648 07:23:21 -- common/autotest_common.sh@1197 -- # return 0 00:13:59.648 07:23:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:59.649 07:23:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:13:59.907 07:23:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:13:59.907 07:23:21 -- common/autotest_common.sh@1187 -- # local i=0 00:13:59.907 07:23:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.907 07:23:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:59.907 07:23:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:01.812 07:23:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:01.812 07:23:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:01.812 07:23:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:14:01.812 07:23:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:01.812 07:23:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.812 07:23:23 -- common/autotest_common.sh@1197 -- # return 0 00:14:01.812 07:23:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:01.812 07:23:23 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:14:02.071 07:23:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:14:02.071 07:23:24 -- common/autotest_common.sh@1187 -- # local i=0 00:14:02.071 07:23:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.071 07:23:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:02.071 07:23:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:03.986 07:23:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:03.986 07:23:26 -- common/autotest_common.sh@1196 -- # 
lsblk -l -o NAME,SERIAL 00:14:03.986 07:23:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:14:03.986 07:23:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:03.986 07:23:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.986 07:23:26 -- common/autotest_common.sh@1197 -- # return 0 00:14:03.986 07:23:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:03.986 07:23:26 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:14:04.245 07:23:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:14:04.245 07:23:26 -- common/autotest_common.sh@1187 -- # local i=0 00:14:04.245 07:23:26 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.245 07:23:26 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:04.245 07:23:26 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:06.150 07:23:28 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:06.150 07:23:28 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:06.150 07:23:28 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:14:06.151 07:23:28 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:06.151 07:23:28 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.151 07:23:28 -- common/autotest_common.sh@1197 -- # return 0 00:14:06.151 07:23:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:06.151 07:23:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:14:06.410 07:23:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:14:06.410 07:23:28 -- common/autotest_common.sh@1187 -- # local i=0 00:14:06.410 07:23:28 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.410 07:23:28 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:06.410 07:23:28 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:08.315 07:23:30 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:08.315 07:23:30 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:08.315 07:23:30 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:14:08.315 07:23:30 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:08.315 07:23:30 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:08.315 07:23:30 -- common/autotest_common.sh@1197 -- # return 0 00:14:08.315 07:23:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:08.315 07:23:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:14:08.574 07:23:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:14:08.574 07:23:30 -- common/autotest_common.sh@1187 -- # local i=0 00:14:08.574 07:23:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.574 07:23:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:08.574 07:23:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:10.479 
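Each of the eleven connections above follows the same pattern: nvme connect to the next subsystem, then waitforserial polls lsblk until a block device carrying the expected serial (SPDK1, SPDK2, ...) appears. A sketch of one round, using the host NQN and host ID generated earlier by nvme gen-hostnqn and held here in illustrative variables:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  # waitforserial SPDK1: poll until the namespace shows up as a block device
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK1)" -ge 1 ]; do sleep 2; done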
07:23:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:10.479 07:23:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:10.479 07:23:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:14:10.479 07:23:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:10.479 07:23:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.479 07:23:32 -- common/autotest_common.sh@1197 -- # return 0 00:14:10.479 07:23:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:10.480 07:23:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:14:10.739 07:23:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:14:10.739 07:23:32 -- common/autotest_common.sh@1187 -- # local i=0 00:14:10.739 07:23:32 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.739 07:23:32 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:10.739 07:23:32 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:12.646 07:23:34 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:12.646 07:23:34 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:14:12.646 07:23:34 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:12.646 07:23:34 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:12.646 07:23:34 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.646 07:23:34 -- common/autotest_common.sh@1197 -- # return 0 00:14:12.646 07:23:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:12.646 07:23:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:14:12.905 07:23:35 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:14:12.905 07:23:35 -- common/autotest_common.sh@1187 -- # local i=0 00:14:12.905 07:23:35 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.905 07:23:35 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:12.905 07:23:35 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:14.809 07:23:37 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:14.809 07:23:37 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:14.809 07:23:37 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:14:14.809 07:23:37 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:14.809 07:23:37 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.809 07:23:37 -- common/autotest_common.sh@1197 -- # return 0 00:14:14.809 07:23:37 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:14:14.809 [global] 00:14:14.809 thread=1 00:14:14.809 invalidate=1 00:14:14.809 rw=read 00:14:14.809 time_based=1 00:14:14.809 runtime=10 00:14:14.809 ioengine=libaio 00:14:14.809 direct=1 00:14:14.809 bs=262144 00:14:14.809 iodepth=64 00:14:14.809 norandommap=1 00:14:14.809 numjobs=1 00:14:14.809 00:14:14.809 [job0] 00:14:14.809 filename=/dev/nvme0n1 00:14:15.069 [job1] 00:14:15.069 filename=/dev/nvme10n1 00:14:15.069 [job2] 00:14:15.069 filename=/dev/nvme1n1 
00:14:15.069 [job3] 00:14:15.069 filename=/dev/nvme2n1 00:14:15.069 [job4] 00:14:15.069 filename=/dev/nvme3n1 00:14:15.069 [job5] 00:14:15.069 filename=/dev/nvme4n1 00:14:15.069 [job6] 00:14:15.069 filename=/dev/nvme5n1 00:14:15.069 [job7] 00:14:15.069 filename=/dev/nvme6n1 00:14:15.069 [job8] 00:14:15.069 filename=/dev/nvme7n1 00:14:15.069 [job9] 00:14:15.069 filename=/dev/nvme8n1 00:14:15.069 [job10] 00:14:15.069 filename=/dev/nvme9n1 00:14:15.069 Could not set queue depth (nvme0n1) 00:14:15.069 Could not set queue depth (nvme10n1) 00:14:15.069 Could not set queue depth (nvme1n1) 00:14:15.069 Could not set queue depth (nvme2n1) 00:14:15.069 Could not set queue depth (nvme3n1) 00:14:15.069 Could not set queue depth (nvme4n1) 00:14:15.069 Could not set queue depth (nvme5n1) 00:14:15.069 Could not set queue depth (nvme6n1) 00:14:15.069 Could not set queue depth (nvme7n1) 00:14:15.069 Could not set queue depth (nvme8n1) 00:14:15.069 Could not set queue depth (nvme9n1) 00:14:15.328 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:15.328 fio-3.35 00:14:15.328 Starting 11 threads 00:14:27.594 00:14:27.594 job0: (groupid=0, jobs=1): err= 0: pid=79322: Thu Nov 28 07:23:47 2024 00:14:27.594 read: IOPS=329, BW=82.4MiB/s (86.4MB/s)(836MiB/10137msec) 00:14:27.594 slat (usec): min=21, max=163148, avg=2994.49, stdev=8001.75 00:14:27.594 clat (msec): min=79, max=369, avg=190.91, stdev=27.20 00:14:27.594 lat (msec): min=79, max=410, avg=193.90, stdev=28.15 00:14:27.594 clat percentiles (msec): 00:14:27.594 | 1.00th=[ 125], 5.00th=[ 167], 10.00th=[ 171], 20.00th=[ 176], 00:14:27.594 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:14:27.594 | 70.00th=[ 194], 80.00th=[ 203], 90.00th=[ 226], 95.00th=[ 249], 00:14:27.594 | 99.00th=[ 284], 99.50th=[ 288], 99.90th=[ 326], 99.95th=[ 330], 00:14:27.594 | 99.99th=[ 372] 00:14:27.594 bw ( KiB/s): min=63872, max=93696, per=6.34%, avg=84130.37, stdev=9507.36, samples=19 00:14:27.594 iops : min= 249, max= 366, avg=328.47, stdev=37.22, samples=19 00:14:27.594 lat (msec) : 100=0.51%, 250=94.55%, 500=4.94% 00:14:27.594 cpu : usr=0.22%, sys=1.60%, ctx=821, majf=0, minf=4097 00:14:27.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:27.594 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.594 issued rwts: total=3342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.594 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.594 job1: (groupid=0, jobs=1): err= 0: pid=79324: Thu Nov 28 07:23:47 2024 00:14:27.594 read: IOPS=634, BW=159MiB/s (166MB/s)(1590MiB/10016msec) 00:14:27.594 slat (usec): min=22, max=46691, avg=1567.69, stdev=3655.36 00:14:27.594 clat (msec): min=12, max=143, avg=99.12, stdev=22.48 00:14:27.594 lat (msec): min=18, max=147, avg=100.69, stdev=22.73 00:14:27.594 clat percentiles (msec): 00:14:27.594 | 1.00th=[ 54], 5.00th=[ 65], 10.00th=[ 69], 20.00th=[ 73], 00:14:27.594 | 30.00th=[ 82], 40.00th=[ 101], 50.00th=[ 106], 60.00th=[ 111], 00:14:27.594 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 130], 00:14:27.594 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:14:27.594 | 99.99th=[ 144] 00:14:27.594 bw ( KiB/s): min=130308, max=227328, per=12.15%, avg=161207.35, stdev=34285.86, samples=20 00:14:27.594 iops : min= 509, max= 888, avg=629.60, stdev=133.86, samples=20 00:14:27.594 lat (msec) : 20=0.06%, 50=0.83%, 100=39.59%, 250=59.52% 00:14:27.594 cpu : usr=0.50%, sys=2.76%, ctx=1321, majf=0, minf=4097 00:14:27.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:27.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.594 issued rwts: total=6358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.594 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.594 job2: (groupid=0, jobs=1): err= 0: pid=79325: Thu Nov 28 07:23:47 2024 00:14:27.594 read: IOPS=326, BW=81.7MiB/s (85.7MB/s)(828MiB/10136msec) 00:14:27.594 slat (usec): min=22, max=153139, avg=3024.77, stdev=8574.81 00:14:27.594 clat (msec): min=94, max=373, avg=192.48, stdev=24.92 00:14:27.594 lat (msec): min=104, max=399, avg=195.50, stdev=26.01 00:14:27.594 clat percentiles (msec): 00:14:27.594 | 1.00th=[ 155], 5.00th=[ 169], 10.00th=[ 171], 20.00th=[ 178], 00:14:27.594 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:14:27.594 | 70.00th=[ 197], 80.00th=[ 203], 90.00th=[ 224], 95.00th=[ 247], 00:14:27.594 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 317], 99.95th=[ 355], 00:14:27.594 | 99.99th=[ 372] 00:14:27.594 bw ( KiB/s): min=63615, max=95744, per=6.30%, avg=83566.16, stdev=10228.23, samples=19 00:14:27.594 iops : min= 248, max= 374, avg=326.21, stdev=40.10, samples=19 00:14:27.594 lat (msec) : 100=0.03%, 250=95.86%, 500=4.11% 00:14:27.594 cpu : usr=0.22%, sys=1.55%, ctx=829, majf=0, minf=4097 00:14:27.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:27.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.594 issued rwts: total=3313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.594 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.594 job3: (groupid=0, jobs=1): err= 0: pid=79326: Thu Nov 28 07:23:47 2024 00:14:27.594 read: IOPS=604, BW=151MiB/s (159MB/s)(1522MiB/10065msec) 00:14:27.594 slat (usec): min=21, max=39775, avg=1638.56, stdev=3762.83 00:14:27.594 clat (msec): min=20, max=168, avg=104.03, stdev=19.89 00:14:27.594 lat (msec): min=21, max=168, avg=105.67, 
stdev=20.12 00:14:27.594 clat percentiles (msec): 00:14:27.594 | 1.00th=[ 59], 5.00th=[ 73], 10.00th=[ 77], 20.00th=[ 82], 00:14:27.594 | 30.00th=[ 94], 40.00th=[ 104], 50.00th=[ 109], 60.00th=[ 113], 00:14:27.594 | 70.00th=[ 117], 80.00th=[ 122], 90.00th=[ 128], 95.00th=[ 133], 00:14:27.594 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 159], 99.95th=[ 159], 00:14:27.594 | 99.99th=[ 169] 00:14:27.595 bw ( KiB/s): min=124928, max=209408, per=11.62%, avg=154139.65, stdev=26737.62, samples=20 00:14:27.595 iops : min= 488, max= 818, avg=601.85, stdev=104.52, samples=20 00:14:27.595 lat (msec) : 50=0.48%, 100=35.60%, 250=63.92% 00:14:27.595 cpu : usr=0.32%, sys=2.22%, ctx=1299, majf=0, minf=4097 00:14:27.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:27.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.595 issued rwts: total=6087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.595 job4: (groupid=0, jobs=1): err= 0: pid=79327: Thu Nov 28 07:23:47 2024 00:14:27.595 read: IOPS=364, BW=91.0MiB/s (95.5MB/s)(923MiB/10139msec) 00:14:27.595 slat (usec): min=21, max=200115, avg=2614.67, stdev=7945.94 00:14:27.595 clat (msec): min=37, max=335, avg=172.91, stdev=48.09 00:14:27.595 lat (msec): min=37, max=437, avg=175.52, stdev=49.11 00:14:27.595 clat percentiles (msec): 00:14:27.595 | 1.00th=[ 48], 5.00th=[ 94], 10.00th=[ 105], 20.00th=[ 115], 00:14:27.595 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:14:27.595 | 70.00th=[ 190], 80.00th=[ 199], 90.00th=[ 213], 95.00th=[ 255], 00:14:27.595 | 99.00th=[ 292], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 334], 00:14:27.595 | 99.99th=[ 334] 00:14:27.595 bw ( KiB/s): min=47104, max=166912, per=7.00%, avg=92893.50, stdev=25773.26, samples=20 00:14:27.595 iops : min= 184, max= 652, avg=362.75, stdev=100.71, samples=20 00:14:27.595 lat (msec) : 50=1.44%, 100=5.50%, 250=87.03%, 500=6.04% 00:14:27.595 cpu : usr=0.18%, sys=1.31%, ctx=913, majf=0, minf=4097 00:14:27.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:14:27.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.595 issued rwts: total=3692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.595 job5: (groupid=0, jobs=1): err= 0: pid=79328: Thu Nov 28 07:23:47 2024 00:14:27.595 read: IOPS=326, BW=81.6MiB/s (85.5MB/s)(827MiB/10140msec) 00:14:27.595 slat (usec): min=23, max=181497, avg=3025.31, stdev=8721.73 00:14:27.595 clat (msec): min=113, max=415, avg=192.78, stdev=24.74 00:14:27.595 lat (msec): min=141, max=415, avg=195.80, stdev=25.82 00:14:27.595 clat percentiles (msec): 00:14:27.595 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:14:27.595 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:14:27.595 | 70.00th=[ 194], 80.00th=[ 203], 90.00th=[ 224], 95.00th=[ 251], 00:14:27.595 | 99.00th=[ 271], 99.50th=[ 292], 99.90th=[ 342], 99.95th=[ 376], 00:14:27.595 | 99.99th=[ 418] 00:14:27.595 bw ( KiB/s): min=52841, max=92672, per=6.27%, avg=83261.89, stdev=10469.66, samples=19 00:14:27.595 iops : min= 206, max= 362, avg=325.05, stdev=40.95, samples=19 00:14:27.595 lat (msec) : 250=94.83%, 500=5.17% 00:14:27.595 cpu : 
usr=0.14%, sys=1.56%, ctx=803, majf=0, minf=4097 00:14:27.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:27.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.595 issued rwts: total=3308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.595 job6: (groupid=0, jobs=1): err= 0: pid=79329: Thu Nov 28 07:23:47 2024 00:14:27.595 read: IOPS=602, BW=151MiB/s (158MB/s)(1515MiB/10058msec) 00:14:27.595 slat (usec): min=20, max=26878, avg=1645.39, stdev=3733.27 00:14:27.595 clat (msec): min=36, max=165, avg=104.41, stdev=19.26 00:14:27.595 lat (msec): min=36, max=166, avg=106.05, stdev=19.51 00:14:27.595 clat percentiles (msec): 00:14:27.595 | 1.00th=[ 64], 5.00th=[ 72], 10.00th=[ 77], 20.00th=[ 83], 00:14:27.595 | 30.00th=[ 94], 40.00th=[ 105], 50.00th=[ 110], 60.00th=[ 113], 00:14:27.595 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 127], 95.00th=[ 131], 00:14:27.595 | 99.00th=[ 140], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 161], 00:14:27.595 | 99.99th=[ 167] 00:14:27.595 bw ( KiB/s): min=127488, max=212480, per=11.61%, avg=154024.84, stdev=26720.34, samples=19 00:14:27.595 iops : min= 498, max= 830, avg=601.53, stdev=104.37, samples=19 00:14:27.595 lat (msec) : 50=0.30%, 100=33.67%, 250=66.03% 00:14:27.595 cpu : usr=0.32%, sys=2.80%, ctx=1299, majf=0, minf=4097 00:14:27.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:27.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.595 issued rwts: total=6061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.595 job7: (groupid=0, jobs=1): err= 0: pid=79330: Thu Nov 28 07:23:47 2024 00:14:27.595 read: IOPS=491, BW=123MiB/s (129MB/s)(1237MiB/10061msec) 00:14:27.595 slat (usec): min=21, max=183345, avg=1985.06, stdev=7046.66 00:14:27.595 clat (msec): min=9, max=434, avg=127.95, stdev=41.17 00:14:27.595 lat (msec): min=9, max=434, avg=129.94, stdev=42.09 00:14:27.595 clat percentiles (msec): 00:14:27.595 | 1.00th=[ 52], 5.00th=[ 94], 10.00th=[ 101], 20.00th=[ 106], 00:14:27.595 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 122], 00:14:27.595 | 70.00th=[ 126], 80.00th=[ 132], 90.00th=[ 194], 95.00th=[ 239], 00:14:27.595 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 347], 00:14:27.595 | 99.99th=[ 435] 00:14:27.595 bw ( KiB/s): min=45568, max=150227, per=9.35%, avg=124057.74, stdev=32290.15, samples=19 00:14:27.595 iops : min= 178, max= 586, avg=484.47, stdev=126.12, samples=19 00:14:27.595 lat (msec) : 10=0.02%, 20=0.26%, 50=0.53%, 100=9.86%, 250=86.70% 00:14:27.595 lat (msec) : 500=2.63% 00:14:27.595 cpu : usr=0.14%, sys=1.66%, ctx=1103, majf=0, minf=4097 00:14:27.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:27.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.595 issued rwts: total=4948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.595 job8: (groupid=0, jobs=1): err= 0: pid=79331: Thu Nov 28 07:23:47 2024 00:14:27.595 read: IOPS=322, BW=80.7MiB/s 
(84.6MB/s)(818MiB/10133msec) 00:14:27.595 slat (usec): min=20, max=213316, avg=3058.53, stdev=9936.97 00:14:27.595 clat (msec): min=111, max=425, avg=194.87, stdev=25.54 00:14:27.595 lat (msec): min=132, max=447, avg=197.93, stdev=27.01 00:14:27.595 clat percentiles (msec): 00:14:27.595 | 1.00th=[ 165], 5.00th=[ 171], 10.00th=[ 174], 20.00th=[ 178], 00:14:27.595 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:14:27.595 | 70.00th=[ 197], 80.00th=[ 207], 90.00th=[ 232], 95.00th=[ 255], 00:14:27.595 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 338], 99.95th=[ 393], 00:14:27.595 | 99.99th=[ 426] 00:14:27.595 bw ( KiB/s): min=43520, max=96768, per=6.21%, avg=82426.89, stdev=12393.85, samples=19 00:14:27.595 iops : min= 170, max= 378, avg=321.84, stdev=48.43, samples=19 00:14:27.595 lat (msec) : 250=93.92%, 500=6.08% 00:14:27.595 cpu : usr=0.18%, sys=1.07%, ctx=799, majf=0, minf=4097 00:14:27.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:27.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.595 issued rwts: total=3272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.595 job9: (groupid=0, jobs=1): err= 0: pid=79332: Thu Nov 28 07:23:47 2024 00:14:27.595 read: IOPS=577, BW=144MiB/s (151MB/s)(1466MiB/10150msec) 00:14:27.595 slat (usec): min=20, max=199278, avg=1677.21, stdev=6526.75 00:14:27.595 clat (msec): min=2, max=416, avg=108.91, stdev=59.16 00:14:27.595 lat (msec): min=2, max=441, avg=110.59, stdev=60.16 00:14:27.595 clat percentiles (msec): 00:14:27.595 | 1.00th=[ 13], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 40], 00:14:27.595 | 30.00th=[ 97], 40.00th=[ 107], 50.00th=[ 112], 60.00th=[ 116], 00:14:27.595 | 70.00th=[ 122], 80.00th=[ 129], 90.00th=[ 205], 95.00th=[ 226], 00:14:27.595 | 99.00th=[ 268], 99.50th=[ 271], 99.90th=[ 355], 99.95th=[ 409], 00:14:27.595 | 99.99th=[ 418] 00:14:27.595 bw ( KiB/s): min=62464, max=422400, per=11.19%, avg=148526.70, stdev=81300.69, samples=20 00:14:27.595 iops : min= 244, max= 1650, avg=580.10, stdev=317.61, samples=20 00:14:27.595 lat (msec) : 4=0.19%, 10=0.66%, 20=0.70%, 50=25.37%, 100=4.59% 00:14:27.595 lat (msec) : 250=65.75%, 500=2.75% 00:14:27.595 cpu : usr=0.34%, sys=1.92%, ctx=1276, majf=0, minf=4097 00:14:27.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:14:27.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.595 issued rwts: total=5865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.595 job10: (groupid=0, jobs=1): err= 0: pid=79333: Thu Nov 28 07:23:47 2024 00:14:27.595 read: IOPS=635, BW=159MiB/s (167MB/s)(1592MiB/10015msec) 00:14:27.595 slat (usec): min=22, max=31472, avg=1565.84, stdev=3588.80 00:14:27.595 clat (msec): min=12, max=153, avg=98.96, stdev=22.41 00:14:27.595 lat (msec): min=16, max=153, avg=100.53, stdev=22.66 00:14:27.595 clat percentiles (msec): 00:14:27.595 | 1.00th=[ 51], 5.00th=[ 66], 10.00th=[ 69], 20.00th=[ 73], 00:14:27.595 | 30.00th=[ 82], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 111], 00:14:27.595 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 130], 00:14:27.595 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 148], 00:14:27.595 | 
99.99th=[ 155] 00:14:27.595 bw ( KiB/s): min=134412, max=227840, per=12.16%, avg=161392.85, stdev=34650.55, samples=20 00:14:27.595 iops : min= 525, max= 890, avg=630.35, stdev=135.32, samples=20 00:14:27.595 lat (msec) : 20=0.06%, 50=0.91%, 100=41.09%, 250=57.93% 00:14:27.595 cpu : usr=0.33%, sys=2.87%, ctx=1333, majf=0, minf=4097 00:14:27.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:14:27.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:27.595 issued rwts: total=6366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.595 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:27.595 00:14:27.595 Run status group 0 (all jobs): 00:14:27.596 READ: bw=1296MiB/s (1359MB/s), 80.7MiB/s-159MiB/s (84.6MB/s-167MB/s), io=12.8GiB (13.8GB), run=10015-10150msec 00:14:27.596 00:14:27.596 Disk stats (read/write): 00:14:27.596 nvme0n1: ios=6552/0, merge=0/0, ticks=1223686/0, in_queue=1223686, util=97.65% 00:14:27.596 nvme10n1: ios=12613/0, merge=0/0, ticks=1237986/0, in_queue=1237986, util=97.91% 00:14:27.596 nvme1n1: ios=6502/0, merge=0/0, ticks=1221798/0, in_queue=1221798, util=98.02% 00:14:27.596 nvme2n1: ios=12054/0, merge=0/0, ticks=1233643/0, in_queue=1233643, util=98.11% 00:14:27.596 nvme3n1: ios=7259/0, merge=0/0, ticks=1222467/0, in_queue=1222467, util=98.29% 00:14:27.596 nvme4n1: ios=6488/0, merge=0/0, ticks=1221193/0, in_queue=1221193, util=98.37% 00:14:27.596 nvme5n1: ios=11985/0, merge=0/0, ticks=1234171/0, in_queue=1234171, util=98.38% 00:14:27.596 nvme6n1: ios=9769/0, merge=0/0, ticks=1235086/0, in_queue=1235086, util=98.52% 00:14:27.596 nvme7n1: ios=6414/0, merge=0/0, ticks=1223527/0, in_queue=1223527, util=98.84% 00:14:27.596 nvme8n1: ios=11608/0, merge=0/0, ticks=1230626/0, in_queue=1230626, util=99.00% 00:14:27.596 nvme9n1: ios=12635/0, merge=0/0, ticks=1237507/0, in_queue=1237507, util=99.11% 00:14:27.596 07:23:47 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:14:27.596 [global] 00:14:27.596 thread=1 00:14:27.596 invalidate=1 00:14:27.596 rw=randwrite 00:14:27.596 time_based=1 00:14:27.596 runtime=10 00:14:27.596 ioengine=libaio 00:14:27.596 direct=1 00:14:27.596 bs=262144 00:14:27.596 iodepth=64 00:14:27.596 norandommap=1 00:14:27.596 numjobs=1 00:14:27.596 00:14:27.596 [job0] 00:14:27.596 filename=/dev/nvme0n1 00:14:27.596 [job1] 00:14:27.596 filename=/dev/nvme10n1 00:14:27.596 [job2] 00:14:27.596 filename=/dev/nvme1n1 00:14:27.596 [job3] 00:14:27.596 filename=/dev/nvme2n1 00:14:27.596 [job4] 00:14:27.596 filename=/dev/nvme3n1 00:14:27.596 [job5] 00:14:27.596 filename=/dev/nvme4n1 00:14:27.596 [job6] 00:14:27.596 filename=/dev/nvme5n1 00:14:27.596 [job7] 00:14:27.596 filename=/dev/nvme6n1 00:14:27.596 [job8] 00:14:27.596 filename=/dev/nvme7n1 00:14:27.596 [job9] 00:14:27.596 filename=/dev/nvme8n1 00:14:27.596 [job10] 00:14:27.596 filename=/dev/nvme9n1 00:14:27.596 Could not set queue depth (nvme0n1) 00:14:27.596 Could not set queue depth (nvme10n1) 00:14:27.596 Could not set queue depth (nvme1n1) 00:14:27.596 Could not set queue depth (nvme2n1) 00:14:27.596 Could not set queue depth (nvme3n1) 00:14:27.596 Could not set queue depth (nvme4n1) 00:14:27.596 Could not set queue depth (nvme5n1) 00:14:27.596 Could not set queue depth (nvme6n1) 00:14:27.596 Could not set queue depth (nvme7n1) 00:14:27.596 Could not set queue depth (nvme8n1) 
00:14:27.596 Could not set queue depth (nvme9n1) 00:14:27.596 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:27.596 fio-3.35 00:14:27.596 Starting 11 threads 00:14:37.584 00:14:37.584 job0: (groupid=0, jobs=1): err= 0: pid=79533: Thu Nov 28 07:23:58 2024 00:14:37.584 write: IOPS=533, BW=133MiB/s (140MB/s)(1347MiB/10104msec); 0 zone resets 00:14:37.584 slat (usec): min=19, max=23138, avg=1850.03, stdev=3155.95 00:14:37.584 clat (msec): min=25, max=220, avg=118.10, stdev= 9.35 00:14:37.584 lat (msec): min=25, max=220, avg=119.95, stdev= 8.97 00:14:37.584 clat percentiles (msec): 00:14:37.584 | 1.00th=[ 99], 5.00th=[ 111], 10.00th=[ 112], 20.00th=[ 114], 00:14:37.584 | 30.00th=[ 117], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 121], 00:14:37.584 | 70.00th=[ 121], 80.00th=[ 122], 90.00th=[ 123], 95.00th=[ 124], 00:14:37.584 | 99.00th=[ 138], 99.50th=[ 169], 99.90th=[ 213], 99.95th=[ 215], 00:14:37.584 | 99.99th=[ 222] 00:14:37.584 bw ( KiB/s): min=131334, max=139776, per=11.81%, avg=136266.20, stdev=1929.01, samples=20 00:14:37.584 iops : min= 513, max= 546, avg=532.15, stdev= 7.44, samples=20 00:14:37.584 lat (msec) : 50=0.30%, 100=0.76%, 250=98.94% 00:14:37.584 cpu : usr=0.99%, sys=1.68%, ctx=7384, majf=0, minf=1 00:14:37.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:37.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.584 issued rwts: total=0,5387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.584 job1: (groupid=0, jobs=1): err= 0: pid=79534: Thu Nov 28 07:23:58 2024 00:14:37.584 write: IOPS=321, BW=80.3MiB/s (84.2MB/s)(817MiB/10174msec); 0 zone resets 00:14:37.584 slat (usec): min=17, max=64210, avg=3054.35, stdev=5410.79 00:14:37.584 clat (msec): min=35, max=376, avg=196.05, stdev=20.74 00:14:37.584 lat (msec): min=35, max=376, avg=199.10, stdev=20.32 00:14:37.584 clat percentiles (msec): 00:14:37.584 | 1.00th=[ 128], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:14:37.584 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 
199], 00:14:37.584 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 218], 00:14:37.584 | 99.00th=[ 275], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 376], 00:14:37.584 | 99.99th=[ 376] 00:14:37.584 bw ( KiB/s): min=73728, max=86016, per=7.11%, avg=82032.40, stdev=2679.38, samples=20 00:14:37.584 iops : min= 288, max= 336, avg=320.35, stdev=10.46, samples=20 00:14:37.584 lat (msec) : 50=0.24%, 100=0.49%, 250=98.10%, 500=1.16% 00:14:37.584 cpu : usr=0.95%, sys=1.07%, ctx=3795, majf=0, minf=1 00:14:37.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:37.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.584 issued rwts: total=0,3269,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.584 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.585 job2: (groupid=0, jobs=1): err= 0: pid=79546: Thu Nov 28 07:23:58 2024 00:14:37.585 write: IOPS=323, BW=80.9MiB/s (84.8MB/s)(823MiB/10179msec); 0 zone resets 00:14:37.585 slat (usec): min=25, max=18667, avg=3033.68, stdev=5239.97 00:14:37.585 clat (msec): min=23, max=376, avg=194.75, stdev=23.30 00:14:37.585 lat (msec): min=23, max=376, avg=197.78, stdev=23.05 00:14:37.585 clat percentiles (msec): 00:14:37.585 | 1.00th=[ 79], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:14:37.585 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 199], 00:14:37.585 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 205], 95.00th=[ 207], 00:14:37.585 | 99.00th=[ 275], 99.50th=[ 326], 99.90th=[ 368], 99.95th=[ 376], 00:14:37.585 | 99.99th=[ 376] 00:14:37.585 bw ( KiB/s): min=78336, max=90112, per=7.16%, avg=82629.20, stdev=2463.31, samples=20 00:14:37.585 iops : min= 306, max= 352, avg=322.70, stdev= 9.62, samples=20 00:14:37.585 lat (msec) : 50=0.61%, 100=0.73%, 250=97.51%, 500=1.15% 00:14:37.585 cpu : usr=0.82%, sys=1.12%, ctx=3350, majf=0, minf=1 00:14:37.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.585 issued rwts: total=0,3292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.585 job3: (groupid=0, jobs=1): err= 0: pid=79547: Thu Nov 28 07:23:58 2024 00:14:37.585 write: IOPS=321, BW=80.4MiB/s (84.3MB/s)(818MiB/10180msec); 0 zone resets 00:14:37.585 slat (usec): min=26, max=64996, avg=3050.32, stdev=5363.45 00:14:37.585 clat (msec): min=10, max=383, avg=195.97, stdev=24.50 00:14:37.585 lat (msec): min=10, max=383, avg=199.02, stdev=24.26 00:14:37.585 clat percentiles (msec): 00:14:37.585 | 1.00th=[ 71], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:14:37.585 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 199], 60.00th=[ 199], 00:14:37.585 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 211], 00:14:37.585 | 99.00th=[ 279], 99.50th=[ 334], 99.90th=[ 372], 99.95th=[ 384], 00:14:37.585 | 99.99th=[ 384] 00:14:37.585 bw ( KiB/s): min=77824, max=86016, per=7.11%, avg=82108.85, stdev=2323.12, samples=20 00:14:37.585 iops : min= 304, max= 336, avg=320.65, stdev= 9.05, samples=20 00:14:37.585 lat (msec) : 20=0.24%, 50=0.37%, 100=0.86%, 250=97.25%, 500=1.28% 00:14:37.585 cpu : usr=0.82%, sys=1.21%, ctx=2904, majf=0, minf=1 00:14:37.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:37.585 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.585 issued rwts: total=0,3272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.585 job4: (groupid=0, jobs=1): err= 0: pid=79548: Thu Nov 28 07:23:58 2024 00:14:37.585 write: IOPS=532, BW=133MiB/s (139MB/s)(1344MiB/10106msec); 0 zone resets 00:14:37.585 slat (usec): min=17, max=50396, avg=1855.30, stdev=3215.69 00:14:37.585 clat (msec): min=12, max=219, avg=118.37, stdev= 9.48 00:14:37.585 lat (msec): min=12, max=219, avg=120.22, stdev= 9.06 00:14:37.585 clat percentiles (msec): 00:14:37.585 | 1.00th=[ 109], 5.00th=[ 111], 10.00th=[ 112], 20.00th=[ 114], 00:14:37.585 | 30.00th=[ 117], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 121], 00:14:37.585 | 70.00th=[ 121], 80.00th=[ 122], 90.00th=[ 123], 95.00th=[ 124], 00:14:37.585 | 99.00th=[ 150], 99.50th=[ 167], 99.90th=[ 213], 99.95th=[ 213], 00:14:37.585 | 99.99th=[ 220] 00:14:37.585 bw ( KiB/s): min=124152, max=139264, per=11.78%, avg=136009.40, stdev=3130.65, samples=20 00:14:37.585 iops : min= 484, max= 544, avg=531.10, stdev=12.34, samples=20 00:14:37.585 lat (msec) : 20=0.11%, 50=0.30%, 100=0.17%, 250=99.42% 00:14:37.585 cpu : usr=0.88%, sys=1.52%, ctx=7295, majf=0, minf=1 00:14:37.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.585 issued rwts: total=0,5377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.585 job5: (groupid=0, jobs=1): err= 0: pid=79551: Thu Nov 28 07:23:58 2024 00:14:37.585 write: IOPS=325, BW=81.5MiB/s (85.5MB/s)(830MiB/10185msec); 0 zone resets 00:14:37.585 slat (usec): min=26, max=46412, avg=3009.25, stdev=5254.72 00:14:37.585 clat (msec): min=28, max=373, avg=193.23, stdev=21.43 00:14:37.585 lat (msec): min=28, max=373, avg=196.24, stdev=21.07 00:14:37.585 clat percentiles (msec): 00:14:37.585 | 1.00th=[ 117], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:14:37.585 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 199], 00:14:37.585 | 70.00th=[ 199], 80.00th=[ 201], 90.00th=[ 205], 95.00th=[ 209], 00:14:37.585 | 99.00th=[ 271], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 372], 00:14:37.585 | 99.99th=[ 376] 00:14:37.585 bw ( KiB/s): min=77824, max=90112, per=7.22%, avg=83328.20, stdev=2569.17, samples=20 00:14:37.585 iops : min= 304, max= 352, avg=325.45, stdev=10.01, samples=20 00:14:37.585 lat (msec) : 50=0.36%, 100=0.48%, 250=98.01%, 500=1.14% 00:14:37.585 cpu : usr=0.97%, sys=1.06%, ctx=3124, majf=0, minf=1 00:14:37.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.585 issued rwts: total=0,3320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.585 job6: (groupid=0, jobs=1): err= 0: pid=79552: Thu Nov 28 07:23:58 2024 00:14:37.585 write: IOPS=324, BW=81.0MiB/s (85.0MB/s)(825MiB/10181msec); 0 zone resets 00:14:37.585 slat (usec): min=17, max=41148, avg=2972.73, stdev=5294.11 00:14:37.585 clat (msec): min=21, max=374, avg=194.36, 
stdev=24.40 00:14:37.585 lat (msec): min=21, max=374, avg=197.33, stdev=24.31 00:14:37.585 clat percentiles (msec): 00:14:37.585 | 1.00th=[ 92], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 188], 00:14:37.585 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 199], 00:14:37.585 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 211], 00:14:37.585 | 99.00th=[ 271], 99.50th=[ 321], 99.90th=[ 363], 99.95th=[ 376], 00:14:37.585 | 99.99th=[ 376] 00:14:37.585 bw ( KiB/s): min=79872, max=94720, per=7.18%, avg=82816.40, stdev=3286.84, samples=20 00:14:37.585 iops : min= 312, max= 370, avg=323.45, stdev=12.82, samples=20 00:14:37.585 lat (msec) : 50=0.36%, 100=1.21%, 250=97.27%, 500=1.15% 00:14:37.585 cpu : usr=0.83%, sys=1.06%, ctx=3443, majf=0, minf=1 00:14:37.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.585 issued rwts: total=0,3300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.585 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.585 job7: (groupid=0, jobs=1): err= 0: pid=79553: Thu Nov 28 07:23:58 2024 00:14:37.585 write: IOPS=879, BW=220MiB/s (231MB/s)(2215MiB/10066msec); 0 zone resets 00:14:37.585 slat (usec): min=17, max=8386, avg=1123.09, stdev=1885.81 00:14:37.585 clat (msec): min=5, max=136, avg=71.57, stdev= 4.80 00:14:37.585 lat (msec): min=5, max=136, avg=72.69, stdev= 4.57 00:14:37.585 clat percentiles (msec): 00:14:37.585 | 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 68], 20.00th=[ 69], 00:14:37.585 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:14:37.585 | 70.00th=[ 73], 80.00th=[ 74], 90.00th=[ 74], 95.00th=[ 75], 00:14:37.585 | 99.00th=[ 77], 99.50th=[ 86], 99.90th=[ 127], 99.95th=[ 132], 00:14:37.585 | 99.99th=[ 136] 00:14:37.585 bw ( KiB/s): min=216576, max=230400, per=19.50%, avg=225106.50, stdev=3050.21, samples=20 00:14:37.585 iops : min= 846, max= 900, avg=879.25, stdev=11.89, samples=20 00:14:37.585 lat (msec) : 10=0.03%, 20=0.07%, 50=0.32%, 100=99.24%, 250=0.34% 00:14:37.585 cpu : usr=1.45%, sys=2.51%, ctx=11220, majf=0, minf=1 00:14:37.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:14:37.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.586 issued rwts: total=0,8858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.586 job8: (groupid=0, jobs=1): err= 0: pid=79554: Thu Nov 28 07:23:58 2024 00:14:37.586 write: IOPS=322, BW=80.6MiB/s (84.6MB/s)(821MiB/10181msec); 0 zone resets 00:14:37.586 slat (usec): min=19, max=64117, avg=3041.06, stdev=5378.09 00:14:37.586 clat (msec): min=25, max=373, avg=195.27, stdev=21.62 00:14:37.586 lat (msec): min=25, max=373, avg=198.32, stdev=21.24 00:14:37.586 clat percentiles (msec): 00:14:37.586 | 1.00th=[ 114], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:14:37.586 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 197], 60.00th=[ 199], 00:14:37.586 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 213], 00:14:37.586 | 99.00th=[ 271], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 376], 00:14:37.586 | 99.99th=[ 376] 00:14:37.586 bw ( KiB/s): min=77824, max=86016, per=7.14%, avg=82432.40, stdev=2244.98, samples=20 00:14:37.586 iops : min= 304, max= 336, avg=321.95, stdev= 
8.74, samples=20 00:14:37.586 lat (msec) : 50=0.37%, 100=0.49%, 250=97.99%, 500=1.16% 00:14:37.586 cpu : usr=0.95%, sys=1.09%, ctx=2954, majf=0, minf=1 00:14:37.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.586 issued rwts: total=0,3284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.586 job9: (groupid=0, jobs=1): err= 0: pid=79555: Thu Nov 28 07:23:58 2024 00:14:37.586 write: IOPS=326, BW=81.6MiB/s (85.5MB/s)(830MiB/10173msec); 0 zone resets 00:14:37.586 slat (usec): min=22, max=60416, avg=2947.77, stdev=5262.51 00:14:37.586 clat (msec): min=63, max=371, avg=193.12, stdev=20.28 00:14:37.586 lat (msec): min=63, max=371, avg=196.07, stdev=20.04 00:14:37.586 clat percentiles (msec): 00:14:37.586 | 1.00th=[ 123], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 186], 00:14:37.586 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 199], 00:14:37.586 | 70.00th=[ 199], 80.00th=[ 201], 90.00th=[ 203], 95.00th=[ 205], 00:14:37.586 | 99.00th=[ 271], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 372], 00:14:37.586 | 99.99th=[ 372] 00:14:37.586 bw ( KiB/s): min=77979, max=94019, per=7.22%, avg=83318.95, stdev=3247.42, samples=20 00:14:37.586 iops : min= 304, max= 367, avg=325.35, stdev=12.72, samples=20 00:14:37.586 lat (msec) : 100=0.60%, 250=98.25%, 500=1.14% 00:14:37.586 cpu : usr=0.85%, sys=1.17%, ctx=2756, majf=0, minf=1 00:14:37.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.586 issued rwts: total=0,3319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.586 job10: (groupid=0, jobs=1): err= 0: pid=79556: Thu Nov 28 07:23:58 2024 00:14:37.586 write: IOPS=318, BW=79.6MiB/s (83.5MB/s)(810MiB/10178msec); 0 zone resets 00:14:37.586 slat (usec): min=25, max=114922, avg=3081.19, stdev=5629.85 00:14:37.586 clat (msec): min=118, max=376, avg=197.68, stdev=16.84 00:14:37.586 lat (msec): min=118, max=376, avg=200.76, stdev=16.12 00:14:37.586 clat percentiles (msec): 00:14:37.586 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:14:37.586 | 30.00th=[ 192], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 199], 00:14:37.586 | 70.00th=[ 201], 80.00th=[ 203], 90.00th=[ 207], 95.00th=[ 209], 00:14:37.586 | 99.00th=[ 275], 99.50th=[ 326], 99.90th=[ 368], 99.95th=[ 376], 00:14:37.586 | 99.99th=[ 376] 00:14:37.586 bw ( KiB/s): min=67584, max=86016, per=7.05%, avg=81323.55, stdev=3746.53, samples=20 00:14:37.586 iops : min= 264, max= 336, avg=317.60, stdev=14.60, samples=20 00:14:37.586 lat (msec) : 250=98.61%, 500=1.39% 00:14:37.586 cpu : usr=0.91%, sys=1.06%, ctx=2745, majf=0, minf=1 00:14:37.586 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:14:37.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:37.586 issued rwts: total=0,3241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.586 00:14:37.586 Run status group 0 (all jobs): 00:14:37.586 WRITE: 
bw=1127MiB/s (1182MB/s), 79.6MiB/s-220MiB/s (83.5MB/s-231MB/s), io=11.2GiB (12.0GB), run=10066-10185msec 00:14:37.586 00:14:37.586 Disk stats (read/write): 00:14:37.586 nvme0n1: ios=49/10620, merge=0/0, ticks=84/1211244, in_queue=1211328, util=97.91% 00:14:37.586 nvme10n1: ios=49/6393, merge=0/0, ticks=68/1206025, in_queue=1206093, util=97.97% 00:14:37.586 nvme1n1: ios=37/6440, merge=0/0, ticks=60/1207445, in_queue=1207505, util=98.07% 00:14:37.586 nvme2n1: ios=26/6413, merge=0/0, ticks=55/1209571, in_queue=1209626, util=98.24% 00:14:37.586 nvme3n1: ios=24/10597, merge=0/0, ticks=50/1211304, in_queue=1211354, util=98.16% 00:14:37.586 nvme4n1: ios=0/6498, merge=0/0, ticks=0/1208643, in_queue=1208643, util=98.31% 00:14:37.586 nvme5n1: ios=0/6455, merge=0/0, ticks=0/1208052, in_queue=1208052, util=98.32% 00:14:37.586 nvme6n1: ios=0/17536, merge=0/0, ticks=0/1212664, in_queue=1212664, util=98.39% 00:14:37.586 nvme7n1: ios=0/6427, merge=0/0, ticks=0/1208236, in_queue=1208236, util=98.76% 00:14:37.586 nvme8n1: ios=0/6494, merge=0/0, ticks=0/1208155, in_queue=1208155, util=98.77% 00:14:37.586 nvme9n1: ios=0/6339, merge=0/0, ticks=0/1206985, in_queue=1206985, util=98.86% 00:14:37.586 07:23:58 -- target/multiconnection.sh@36 -- # sync 00:14:37.586 07:23:58 -- target/multiconnection.sh@37 -- # seq 1 11 00:14:37.586 07:23:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.586 07:23:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:37.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.586 07:23:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:14:37.586 07:23:58 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.586 07:23:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.586 07:23:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:14:37.586 07:23:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.586 07:23:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:14:37.586 07:23:58 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.586 07:23:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:37.586 07:23:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.586 07:23:58 -- common/autotest_common.sh@10 -- # set +x 00:14:37.586 07:23:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.586 07:23:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.586 07:23:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:14:37.586 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:14:37.586 07:23:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:14:37.586 07:23:58 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.586 07:23:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.586 07:23:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:14:37.586 07:23:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.586 07:23:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:14:37.586 07:23:58 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.586 07:23:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:37.586 07:23:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.586 07:23:58 -- common/autotest_common.sh@10 -- # set +x 
00:14:37.586 07:23:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.586 07:23:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.586 07:23:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:14:37.586 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:14:37.586 07:23:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:14:37.586 07:23:58 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.586 07:23:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.586 07:23:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:14:37.586 07:23:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.586 07:23:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:14:37.586 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.587 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:37.587 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.587 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.587 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.587 07:23:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.587 07:23:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:14:37.587 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:14:37.587 07:23:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:14:37.587 07:23:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.587 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:37.587 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.587 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.587 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.587 07:23:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.587 07:23:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:14:37.587 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:14:37.587 07:23:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:14:37.587 07:23:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:14:37.587 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.587 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:14:37.587 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.587 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.587 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.587 07:23:59 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.587 07:23:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:14:37.587 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:14:37.587 07:23:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:14:37.587 07:23:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:14:37.587 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.587 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:14:37.587 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.587 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.587 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.587 07:23:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.587 07:23:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:14:37.587 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:14:37.587 07:23:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:14:37.587 07:23:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:14:37.587 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.587 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:14:37.587 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.587 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.587 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.587 07:23:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.587 07:23:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:14:37.587 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:14:37.587 07:23:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:14:37.587 07:23:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:14:37.587 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.587 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:14:37.587 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.587 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.587 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.587 07:23:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.587 07:23:59 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:14:37.587 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:14:37.587 07:23:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:14:37.587 07:23:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:14:37.587 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.587 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:14:37.587 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.587 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.587 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.587 07:23:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.587 07:23:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:14:37.587 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:14:37.587 07:23:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:14:37.587 07:23:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.587 07:23:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:14:37.587 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.587 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:14:37.587 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.587 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.587 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.587 07:23:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:37.587 07:23:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:14:37.587 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:14:37.587 07:23:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:14:37.587 07:23:59 -- common/autotest_common.sh@1208 -- # local i=0 00:14:37.588 07:23:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:37.588 07:23:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:14:37.588 07:23:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:37.588 07:23:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:14:37.588 07:23:59 -- common/autotest_common.sh@1220 -- # return 0 00:14:37.588 07:23:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:14:37.588 07:23:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.588 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:14:37.588 07:23:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.588 07:23:59 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:14:37.588 07:23:59 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:37.588 07:23:59 
-- target/multiconnection.sh@47 -- # nvmftestfini 00:14:37.588 07:23:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:37.588 07:23:59 -- nvmf/common.sh@116 -- # sync 00:14:37.588 07:23:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:37.588 07:23:59 -- nvmf/common.sh@119 -- # set +e 00:14:37.588 07:23:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:37.588 07:23:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:37.588 rmmod nvme_tcp 00:14:37.588 rmmod nvme_fabrics 00:14:37.588 rmmod nvme_keyring 00:14:37.588 07:23:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:37.588 07:23:59 -- nvmf/common.sh@123 -- # set -e 00:14:37.588 07:23:59 -- nvmf/common.sh@124 -- # return 0 00:14:37.588 07:23:59 -- nvmf/common.sh@477 -- # '[' -n 78859 ']' 00:14:37.588 07:23:59 -- nvmf/common.sh@478 -- # killprocess 78859 00:14:37.588 07:23:59 -- common/autotest_common.sh@936 -- # '[' -z 78859 ']' 00:14:37.588 07:23:59 -- common/autotest_common.sh@940 -- # kill -0 78859 00:14:37.588 07:23:59 -- common/autotest_common.sh@941 -- # uname 00:14:37.588 07:23:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:37.588 07:23:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78859 00:14:37.848 killing process with pid 78859 00:14:37.848 07:23:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:37.848 07:23:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:37.848 07:23:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78859' 00:14:37.848 07:23:59 -- common/autotest_common.sh@955 -- # kill 78859 00:14:37.848 07:23:59 -- common/autotest_common.sh@960 -- # wait 78859 00:14:38.417 07:24:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:38.417 07:24:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:38.417 07:24:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:38.417 07:24:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.417 07:24:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:38.417 07:24:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.417 07:24:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.417 07:24:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.417 07:24:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:38.417 00:14:38.417 real 0m49.623s 00:14:38.417 user 2m45.076s 00:14:38.417 sys 0m32.376s 00:14:38.417 ************************************ 00:14:38.417 END TEST nvmf_multiconnection 00:14:38.417 ************************************ 00:14:38.417 07:24:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:38.417 07:24:00 -- common/autotest_common.sh@10 -- # set +x 00:14:38.417 07:24:00 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:38.417 07:24:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:38.417 07:24:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.417 07:24:00 -- common/autotest_common.sh@10 -- # set +x 00:14:38.417 ************************************ 00:14:38.417 START TEST nvmf_initiator_timeout 00:14:38.417 ************************************ 00:14:38.417 07:24:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:38.417 * Looking for test storage... 
00:14:38.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:38.417 07:24:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:38.417 07:24:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:38.417 07:24:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:38.417 07:24:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:38.417 07:24:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:38.417 07:24:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:38.417 07:24:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:38.417 07:24:00 -- scripts/common.sh@335 -- # IFS=.-: 00:14:38.417 07:24:00 -- scripts/common.sh@335 -- # read -ra ver1 00:14:38.417 07:24:00 -- scripts/common.sh@336 -- # IFS=.-: 00:14:38.417 07:24:00 -- scripts/common.sh@336 -- # read -ra ver2 00:14:38.417 07:24:00 -- scripts/common.sh@337 -- # local 'op=<' 00:14:38.417 07:24:00 -- scripts/common.sh@339 -- # ver1_l=2 00:14:38.417 07:24:00 -- scripts/common.sh@340 -- # ver2_l=1 00:14:38.417 07:24:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:38.417 07:24:00 -- scripts/common.sh@343 -- # case "$op" in 00:14:38.417 07:24:00 -- scripts/common.sh@344 -- # : 1 00:14:38.417 07:24:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:38.417 07:24:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:38.417 07:24:00 -- scripts/common.sh@364 -- # decimal 1 00:14:38.417 07:24:00 -- scripts/common.sh@352 -- # local d=1 00:14:38.417 07:24:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:38.417 07:24:00 -- scripts/common.sh@354 -- # echo 1 00:14:38.417 07:24:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:38.417 07:24:00 -- scripts/common.sh@365 -- # decimal 2 00:14:38.417 07:24:00 -- scripts/common.sh@352 -- # local d=2 00:14:38.417 07:24:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:38.417 07:24:00 -- scripts/common.sh@354 -- # echo 2 00:14:38.417 07:24:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:38.417 07:24:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:38.417 07:24:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:38.417 07:24:00 -- scripts/common.sh@367 -- # return 0 00:14:38.417 07:24:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:38.417 07:24:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:38.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.417 --rc genhtml_branch_coverage=1 00:14:38.417 --rc genhtml_function_coverage=1 00:14:38.417 --rc genhtml_legend=1 00:14:38.417 --rc geninfo_all_blocks=1 00:14:38.417 --rc geninfo_unexecuted_blocks=1 00:14:38.417 00:14:38.417 ' 00:14:38.417 07:24:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:38.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.417 --rc genhtml_branch_coverage=1 00:14:38.417 --rc genhtml_function_coverage=1 00:14:38.417 --rc genhtml_legend=1 00:14:38.418 --rc geninfo_all_blocks=1 00:14:38.418 --rc geninfo_unexecuted_blocks=1 00:14:38.418 00:14:38.418 ' 00:14:38.418 07:24:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.418 --rc genhtml_branch_coverage=1 00:14:38.418 --rc genhtml_function_coverage=1 00:14:38.418 --rc genhtml_legend=1 00:14:38.418 --rc geninfo_all_blocks=1 00:14:38.418 --rc geninfo_unexecuted_blocks=1 00:14:38.418 00:14:38.418 ' 00:14:38.418 
07:24:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:38.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:38.418 --rc genhtml_branch_coverage=1 00:14:38.418 --rc genhtml_function_coverage=1 00:14:38.418 --rc genhtml_legend=1 00:14:38.418 --rc geninfo_all_blocks=1 00:14:38.418 --rc geninfo_unexecuted_blocks=1 00:14:38.418 00:14:38.418 ' 00:14:38.418 07:24:00 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.418 07:24:00 -- nvmf/common.sh@7 -- # uname -s 00:14:38.418 07:24:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.418 07:24:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.418 07:24:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.418 07:24:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.418 07:24:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.418 07:24:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.418 07:24:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.418 07:24:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.418 07:24:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.418 07:24:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.418 07:24:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:14:38.418 07:24:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:14:38.418 07:24:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.418 07:24:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.418 07:24:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.418 07:24:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.418 07:24:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.418 07:24:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.418 07:24:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.418 07:24:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.418 07:24:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.418 07:24:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.418 07:24:00 -- paths/export.sh@5 -- # export PATH 00:14:38.418 07:24:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.418 07:24:00 -- nvmf/common.sh@46 -- # : 0 00:14:38.418 07:24:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:38.418 07:24:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:38.418 07:24:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:38.418 07:24:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.418 07:24:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.418 07:24:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:38.418 07:24:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:38.418 07:24:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:38.418 07:24:00 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.418 07:24:00 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.418 07:24:00 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:14:38.418 07:24:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:38.418 07:24:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.418 07:24:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:38.418 07:24:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:38.418 07:24:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:38.418 07:24:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.418 07:24:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.418 07:24:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.418 07:24:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:38.418 07:24:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:38.418 07:24:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:38.418 07:24:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:38.418 07:24:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:38.418 07:24:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:38.418 07:24:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.418 07:24:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.418 07:24:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:38.418 07:24:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:38.418 07:24:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.418 07:24:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.418 07:24:00 
-- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.418 07:24:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.418 07:24:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.418 07:24:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.418 07:24:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.418 07:24:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.418 07:24:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:38.677 07:24:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:38.678 Cannot find device "nvmf_tgt_br" 00:14:38.678 07:24:00 -- nvmf/common.sh@154 -- # true 00:14:38.678 07:24:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.678 Cannot find device "nvmf_tgt_br2" 00:14:38.678 07:24:00 -- nvmf/common.sh@155 -- # true 00:14:38.678 07:24:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:38.678 07:24:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:38.678 Cannot find device "nvmf_tgt_br" 00:14:38.678 07:24:00 -- nvmf/common.sh@157 -- # true 00:14:38.678 07:24:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:38.678 Cannot find device "nvmf_tgt_br2" 00:14:38.678 07:24:00 -- nvmf/common.sh@158 -- # true 00:14:38.678 07:24:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:38.678 07:24:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:38.678 07:24:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.678 07:24:00 -- nvmf/common.sh@161 -- # true 00:14:38.678 07:24:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.678 07:24:00 -- nvmf/common.sh@162 -- # true 00:14:38.678 07:24:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.678 07:24:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.678 07:24:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.678 07:24:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.678 07:24:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.678 07:24:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.678 07:24:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.678 07:24:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:38.678 07:24:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:38.678 07:24:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:38.678 07:24:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:38.678 07:24:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:38.678 07:24:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:38.678 07:24:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.678 07:24:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.678 07:24:00 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.678 07:24:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:38.678 07:24:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:38.678 07:24:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.678 07:24:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.937 07:24:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.937 07:24:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.937 07:24:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.937 07:24:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:38.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:14:38.937 00:14:38.937 --- 10.0.0.2 ping statistics --- 00:14:38.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.937 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:38.937 07:24:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:38.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.028 ms 00:14:38.938 00:14:38.938 --- 10.0.0.3 ping statistics --- 00:14:38.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.938 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:38.938 07:24:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:38.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:14:38.938 00:14:38.938 --- 10.0.0.1 ping statistics --- 00:14:38.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.938 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:14:38.938 07:24:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.938 07:24:00 -- nvmf/common.sh@421 -- # return 0 00:14:38.938 07:24:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:38.938 07:24:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.938 07:24:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:38.938 07:24:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:38.938 07:24:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.938 07:24:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:38.938 07:24:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:38.938 07:24:01 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:14:38.938 07:24:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:38.938 07:24:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:38.938 07:24:01 -- common/autotest_common.sh@10 -- # set +x 00:14:38.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
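The xtrace above is nvmf/common.sh building the virtual test network from scratch: a namespace for the target, three veth pairs, a bridge joining the host-side ends, firewall openings for NVMe/TCP port 4420, and a three-way ping check. A condensed sketch of the same topology in plain iproute2/iptables follows; names, addresses and rules are copied from the trace, while the helper structure and error handling of nvmf/common.sh are not reproduced.

    # Sketch of the veth/bridge topology the trace builds (names and IPs as logged).
    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                         # bridge ties the host-side ends together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP back to the initiator
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let the bridge forward between its ports
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # initiator -> both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator

The three one-packet pings at the end are the same connectivity check the trace performs before the target application is started.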
00:14:38.938 07:24:01 -- nvmf/common.sh@469 -- # nvmfpid=79929 00:14:38.938 07:24:01 -- nvmf/common.sh@470 -- # waitforlisten 79929 00:14:38.938 07:24:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.938 07:24:01 -- common/autotest_common.sh@829 -- # '[' -z 79929 ']' 00:14:38.938 07:24:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.938 07:24:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:38.938 07:24:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.938 07:24:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:38.938 07:24:01 -- common/autotest_common.sh@10 -- # set +x 00:14:38.938 [2024-11-28 07:24:01.065578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:38.938 [2024-11-28 07:24:01.065811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.938 [2024-11-28 07:24:01.205895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:39.197 [2024-11-28 07:24:01.280852] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:39.197 [2024-11-28 07:24:01.281130] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.197 [2024-11-28 07:24:01.281196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.197 [2024-11-28 07:24:01.281527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
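Here the target is launched inside the namespace and the harness blocks until the RPC socket answers. A rough stand-alone equivalent is sketched below; the launch command and the 100-retry limit come from the trace, while the polling body is an assumption (the real waitforlisten helper lives in autotest_common.sh and is not shown in the log).

    # Start nvmf_tgt inside the target namespace (flags copied from the trace).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # waitforlisten-style poll: loop until the UNIX-domain RPC socket accepts a request.
    # Illustrative only; rpc_get_methods is a standard SPDK RPC used here as a liveness probe.
    rpc_addr=/var/tmp/spdk.sock
    for i in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

-m 0xF pins the app to four reactor cores (0 through 3), which is why four "Reactor started" notices follow, and -e 0xFFFF enables every tracepoint group, which is what the trace-setup notices refer to.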
00:14:39.197 [2024-11-28 07:24:01.281879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.197 [2024-11-28 07:24:01.281950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.197 [2024-11-28 07:24:01.282011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.197 [2024-11-28 07:24:01.282011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.134 07:24:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.134 07:24:02 -- common/autotest_common.sh@862 -- # return 0 00:14:40.134 07:24:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:40.134 07:24:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.134 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.134 07:24:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:40.134 07:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.134 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.134 Malloc0 00:14:40.134 07:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:14:40.134 07:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.134 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.134 Delay0 00:14:40.134 07:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.134 07:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.134 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.134 [2024-11-28 07:24:02.202165] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.134 07:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:40.134 07:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.134 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.134 07:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.134 07:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.134 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.134 07:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.134 07:24:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.134 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:14:40.134 [2024-11-28 07:24:02.234470] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.134 07:24:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:40.134 07:24:02 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:14:40.134 07:24:02 -- common/autotest_common.sh@1187 -- # local i=0 00:14:40.134 07:24:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.134 07:24:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:40.134 07:24:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:42.671 07:24:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:42.671 07:24:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:42.671 07:24:04 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.671 07:24:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:42.671 07:24:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.671 07:24:04 -- common/autotest_common.sh@1197 -- # return 0 00:14:42.671 07:24:04 -- target/initiator_timeout.sh@35 -- # fio_pid=79993 00:14:42.672 07:24:04 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:14:42.672 07:24:04 -- target/initiator_timeout.sh@37 -- # sleep 3 00:14:42.672 [global] 00:14:42.672 thread=1 00:14:42.672 invalidate=1 00:14:42.672 rw=write 00:14:42.672 time_based=1 00:14:42.672 runtime=60 00:14:42.672 ioengine=libaio 00:14:42.672 direct=1 00:14:42.672 bs=4096 00:14:42.672 iodepth=1 00:14:42.672 norandommap=0 00:14:42.672 numjobs=1 00:14:42.672 00:14:42.672 verify_dump=1 00:14:42.672 verify_backlog=512 00:14:42.672 verify_state_save=0 00:14:42.672 do_verify=1 00:14:42.672 verify=crc32c-intel 00:14:42.672 [job0] 00:14:42.672 filename=/dev/nvme0n1 00:14:42.672 Could not set queue depth (nvme0n1) 00:14:42.672 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:42.672 fio-3.35 00:14:42.672 Starting 1 thread 00:14:45.220 07:24:07 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:14:45.220 07:24:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.220 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:14:45.220 true 00:14:45.220 07:24:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.220 07:24:07 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:14:45.220 07:24:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.220 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:14:45.220 true 00:14:45.220 07:24:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.220 07:24:07 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:14:45.220 07:24:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.220 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:14:45.220 true 00:14:45.220 07:24:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.220 07:24:07 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:14:45.220 07:24:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.220 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:14:45.220 true 00:14:45.220 07:24:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.220 07:24:07 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:14:48.511 07:24:10 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:14:48.511 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.511 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:14:48.511 true 00:14:48.511 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.511 07:24:10 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:14:48.511 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.511 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:14:48.511 true 00:14:48.511 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.511 07:24:10 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:14:48.511 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.511 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:14:48.511 true 00:14:48.511 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.511 07:24:10 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:14:48.511 07:24:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.511 07:24:10 -- common/autotest_common.sh@10 -- # set +x 00:14:48.511 true 00:14:48.511 07:24:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.511 07:24:10 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:14:48.511 07:24:10 -- target/initiator_timeout.sh@54 -- # wait 79993 00:15:44.788 00:15:44.788 job0: (groupid=0, jobs=1): err= 0: pid=80024: Thu Nov 28 07:25:04 2024 00:15:44.788 read: IOPS=622, BW=2492KiB/s (2552kB/s)(146MiB/60000msec) 00:15:44.788 slat (usec): min=10, max=9074, avg=14.84, stdev=61.21 00:15:44.788 clat (usec): min=157, max=40843k, avg=1366.13, stdev=211258.37 00:15:44.788 lat (usec): min=168, max=40843k, avg=1380.98, stdev=211258.36 00:15:44.788 clat percentiles (usec): 00:15:44.788 | 1.00th=[ 196], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 249], 00:15:44.788 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:15:44.788 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 322], 00:15:44.788 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 408], 99.95th=[ 676], 00:15:44.788 | 99.99th=[ 1319] 00:15:44.788 write: IOPS=625, BW=2501KiB/s (2561kB/s)(147MiB/60000msec); 0 zone resets 00:15:44.788 slat (usec): min=12, max=671, avg=20.65, stdev= 8.42 00:15:44.788 clat (usec): min=120, max=855, avg=199.67, stdev=25.64 00:15:44.788 lat (usec): min=136, max=946, avg=220.31, stdev=27.67 00:15:44.788 clat percentiles (usec): 00:15:44.788 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 180], 00:15:44.788 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:15:44.788 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 233], 95.00th=[ 243], 00:15:44.788 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 306], 99.95th=[ 338], 00:15:44.788 | 99.99th=[ 799] 00:15:44.788 bw ( KiB/s): min= 4056, max= 8232, per=100.00%, avg=7760.84, stdev=767.35, samples=38 00:15:44.788 iops : min= 1014, max= 2058, avg=1940.21, stdev=191.84, samples=38 00:15:44.788 lat (usec) : 250=58.59%, 500=41.36%, 750=0.02%, 1000=0.02% 00:15:44.788 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:15:44.788 cpu : usr=0.46%, sys=1.70%, ctx=74901, majf=0, minf=5 00:15:44.788 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:44.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:15:44.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.788 issued rwts: total=37376,37508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.788 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:44.788 00:15:44.788 Run status group 0 (all jobs): 00:15:44.788 READ: bw=2492KiB/s (2552kB/s), 2492KiB/s-2492KiB/s (2552kB/s-2552kB/s), io=146MiB (153MB), run=60000-60000msec 00:15:44.788 WRITE: bw=2501KiB/s (2561kB/s), 2501KiB/s-2501KiB/s (2561kB/s-2561kB/s), io=147MiB (154MB), run=60000-60000msec 00:15:44.788 00:15:44.788 Disk stats (read/write): 00:15:44.788 nvme0n1: ios=37326/37376, merge=0/0, ticks=10380/7801, in_queue=18181, util=99.63% 00:15:44.788 07:25:04 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.788 07:25:04 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:44.788 07:25:04 -- common/autotest_common.sh@1208 -- # local i=0 00:15:44.788 07:25:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:44.788 07:25:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.788 07:25:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:44.788 07:25:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:44.788 nvmf hotplug test: fio successful as expected 00:15:44.788 07:25:04 -- common/autotest_common.sh@1220 -- # return 0 00:15:44.788 07:25:04 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:15:44.788 07:25:04 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:15:44.788 07:25:04 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:44.788 07:25:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.788 07:25:04 -- common/autotest_common.sh@10 -- # set +x 00:15:44.788 07:25:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.788 07:25:04 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:15:44.788 07:25:04 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:15:44.788 07:25:04 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:15:44.788 07:25:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:44.788 07:25:04 -- nvmf/common.sh@116 -- # sync 00:15:44.788 07:25:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:44.788 07:25:04 -- nvmf/common.sh@119 -- # set +e 00:15:44.788 07:25:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:44.788 07:25:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:44.788 rmmod nvme_tcp 00:15:44.788 rmmod nvme_fabrics 00:15:44.788 rmmod nvme_keyring 00:15:44.788 07:25:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:44.788 07:25:04 -- nvmf/common.sh@123 -- # set -e 00:15:44.788 07:25:04 -- nvmf/common.sh@124 -- # return 0 00:15:44.788 07:25:04 -- nvmf/common.sh@477 -- # '[' -n 79929 ']' 00:15:44.788 07:25:04 -- nvmf/common.sh@478 -- # killprocess 79929 00:15:44.788 07:25:04 -- common/autotest_common.sh@936 -- # '[' -z 79929 ']' 00:15:44.788 07:25:04 -- common/autotest_common.sh@940 -- # kill -0 79929 00:15:44.788 07:25:04 -- common/autotest_common.sh@941 -- # uname 00:15:44.788 07:25:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:44.788 07:25:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79929 00:15:44.788 killing process with pid 79929 
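The fio job that just finished was generated by scripts/fio-wrapper (-p nvmf -i 4096 -d 1 -t write -r 60 -v) and its parameters were dumped verbatim before the run. As a minimal sketch, the same job can be re-materialised and run by hand; the job-file name below is arbitrary and the wrapper's extra plumbing is not reproduced.

    # Recreate the logged job file and run it against the connected namespace /dev/nvme0n1.
    cat > initiator_timeout.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=60
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio initiator_timeout.fio

The bdev_delay_update_latency calls seen before and after the sleeps raise and then restore Delay0's artificial latencies while fio is running; the "nvmf hotplug test: fio successful as expected" line above is the test confirming that the job still completed its verifies cleanly.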
00:15:44.788 07:25:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:44.788 07:25:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:44.788 07:25:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79929' 00:15:44.788 07:25:04 -- common/autotest_common.sh@955 -- # kill 79929 00:15:44.788 07:25:04 -- common/autotest_common.sh@960 -- # wait 79929 00:15:44.788 07:25:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:44.788 07:25:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:44.788 07:25:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:44.788 07:25:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.788 07:25:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:44.788 07:25:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.788 07:25:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.788 07:25:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.788 07:25:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:44.788 00:15:44.788 real 1m4.677s 00:15:44.788 user 3m59.767s 00:15:44.788 sys 0m15.341s 00:15:44.788 07:25:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:44.788 07:25:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.788 ************************************ 00:15:44.788 END TEST nvmf_initiator_timeout 00:15:44.788 ************************************ 00:15:44.788 07:25:05 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:15:44.788 07:25:05 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:44.788 07:25:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.788 07:25:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.788 07:25:05 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:44.788 07:25:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.788 07:25:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.788 07:25:05 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:44.788 07:25:05 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:44.788 07:25:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:44.788 07:25:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:44.788 07:25:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.788 ************************************ 00:15:44.789 START TEST nvmf_identify 00:15:44.789 ************************************ 00:15:44.789 07:25:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:44.789 * Looking for test storage... 
00:15:44.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:44.789 07:25:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:44.789 07:25:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:44.789 07:25:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:44.789 07:25:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:44.789 07:25:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:44.789 07:25:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:44.789 07:25:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:44.789 07:25:05 -- scripts/common.sh@335 -- # IFS=.-: 00:15:44.789 07:25:05 -- scripts/common.sh@335 -- # read -ra ver1 00:15:44.789 07:25:05 -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.789 07:25:05 -- scripts/common.sh@336 -- # read -ra ver2 00:15:44.789 07:25:05 -- scripts/common.sh@337 -- # local 'op=<' 00:15:44.789 07:25:05 -- scripts/common.sh@339 -- # ver1_l=2 00:15:44.789 07:25:05 -- scripts/common.sh@340 -- # ver2_l=1 00:15:44.789 07:25:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:44.789 07:25:05 -- scripts/common.sh@343 -- # case "$op" in 00:15:44.789 07:25:05 -- scripts/common.sh@344 -- # : 1 00:15:44.789 07:25:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:44.789 07:25:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:44.789 07:25:05 -- scripts/common.sh@364 -- # decimal 1 00:15:44.789 07:25:05 -- scripts/common.sh@352 -- # local d=1 00:15:44.789 07:25:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.789 07:25:05 -- scripts/common.sh@354 -- # echo 1 00:15:44.789 07:25:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:44.789 07:25:05 -- scripts/common.sh@365 -- # decimal 2 00:15:44.789 07:25:05 -- scripts/common.sh@352 -- # local d=2 00:15:44.789 07:25:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.789 07:25:05 -- scripts/common.sh@354 -- # echo 2 00:15:44.789 07:25:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:44.789 07:25:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:44.789 07:25:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:44.789 07:25:05 -- scripts/common.sh@367 -- # return 0 00:15:44.789 07:25:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.789 07:25:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.789 --rc genhtml_branch_coverage=1 00:15:44.789 --rc genhtml_function_coverage=1 00:15:44.789 --rc genhtml_legend=1 00:15:44.789 --rc geninfo_all_blocks=1 00:15:44.789 --rc geninfo_unexecuted_blocks=1 00:15:44.789 00:15:44.789 ' 00:15:44.789 07:25:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.789 --rc genhtml_branch_coverage=1 00:15:44.789 --rc genhtml_function_coverage=1 00:15:44.789 --rc genhtml_legend=1 00:15:44.789 --rc geninfo_all_blocks=1 00:15:44.789 --rc geninfo_unexecuted_blocks=1 00:15:44.789 00:15:44.789 ' 00:15:44.789 07:25:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.789 --rc genhtml_branch_coverage=1 00:15:44.789 --rc genhtml_function_coverage=1 00:15:44.789 --rc genhtml_legend=1 00:15:44.789 --rc geninfo_all_blocks=1 00:15:44.789 --rc geninfo_unexecuted_blocks=1 00:15:44.789 00:15:44.789 ' 00:15:44.789 
07:25:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:44.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.789 --rc genhtml_branch_coverage=1 00:15:44.789 --rc genhtml_function_coverage=1 00:15:44.789 --rc genhtml_legend=1 00:15:44.789 --rc geninfo_all_blocks=1 00:15:44.789 --rc geninfo_unexecuted_blocks=1 00:15:44.789 00:15:44.789 ' 00:15:44.789 07:25:05 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:44.789 07:25:05 -- nvmf/common.sh@7 -- # uname -s 00:15:44.789 07:25:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.789 07:25:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.789 07:25:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.789 07:25:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.789 07:25:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.789 07:25:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.789 07:25:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.789 07:25:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.789 07:25:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.789 07:25:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.789 07:25:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:15:44.789 07:25:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:15:44.789 07:25:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.789 07:25:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.789 07:25:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:44.789 07:25:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:44.789 07:25:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.789 07:25:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.789 07:25:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.789 07:25:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.789 07:25:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.789 07:25:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.789 07:25:05 -- paths/export.sh@5 -- # export PATH 00:15:44.789 07:25:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.789 07:25:05 -- nvmf/common.sh@46 -- # : 0 00:15:44.789 07:25:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:44.789 07:25:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:44.789 07:25:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:44.789 07:25:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.789 07:25:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.789 07:25:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:44.789 07:25:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:44.789 07:25:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:44.789 07:25:05 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.789 07:25:05 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.789 07:25:05 -- host/identify.sh@14 -- # nvmftestinit 00:15:44.789 07:25:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:44.789 07:25:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.789 07:25:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:44.789 07:25:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:44.789 07:25:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:44.789 07:25:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.789 07:25:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.789 07:25:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.789 07:25:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:44.789 07:25:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:44.789 07:25:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:44.789 07:25:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:44.789 07:25:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:44.789 07:25:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:44.789 07:25:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.789 07:25:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.789 07:25:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:44.789 07:25:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:44.789 07:25:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:44.789 07:25:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:44.789 07:25:05 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:44.789 07:25:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.789 07:25:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:44.789 07:25:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:44.789 07:25:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:44.789 07:25:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:44.789 07:25:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:44.789 07:25:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:44.789 Cannot find device "nvmf_tgt_br" 00:15:44.789 07:25:05 -- nvmf/common.sh@154 -- # true 00:15:44.789 07:25:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:44.789 Cannot find device "nvmf_tgt_br2" 00:15:44.789 07:25:05 -- nvmf/common.sh@155 -- # true 00:15:44.789 07:25:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:44.789 07:25:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:44.789 Cannot find device "nvmf_tgt_br" 00:15:44.790 07:25:05 -- nvmf/common.sh@157 -- # true 00:15:44.790 07:25:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:44.790 Cannot find device "nvmf_tgt_br2" 00:15:44.790 07:25:05 -- nvmf/common.sh@158 -- # true 00:15:44.790 07:25:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:44.790 07:25:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:44.790 07:25:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.790 07:25:05 -- nvmf/common.sh@161 -- # true 00:15:44.790 07:25:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.790 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.790 07:25:05 -- nvmf/common.sh@162 -- # true 00:15:44.790 07:25:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:44.790 07:25:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:44.790 07:25:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:44.790 07:25:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:44.790 07:25:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:44.790 07:25:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:44.790 07:25:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:44.790 07:25:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:44.790 07:25:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:44.790 07:25:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:44.790 07:25:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:44.790 07:25:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:44.790 07:25:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:44.790 07:25:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.790 07:25:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.790 07:25:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:44.790 07:25:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:44.790 07:25:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:44.790 07:25:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.790 07:25:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.790 07:25:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:44.790 07:25:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.790 07:25:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.790 07:25:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:44.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:15:44.790 00:15:44.790 --- 10.0.0.2 ping statistics --- 00:15:44.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.790 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:15:44.790 07:25:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:44.790 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.790 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:44.790 00:15:44.790 --- 10.0.0.3 ping statistics --- 00:15:44.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.790 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:44.790 07:25:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:44.790 00:15:44.790 --- 10.0.0.1 ping statistics --- 00:15:44.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.790 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:44.790 07:25:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.790 07:25:05 -- nvmf/common.sh@421 -- # return 0 00:15:44.790 07:25:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:44.790 07:25:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.790 07:25:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:44.790 07:25:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:44.790 07:25:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.790 07:25:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:44.790 07:25:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:44.790 07:25:05 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:44.790 07:25:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:44.790 07:25:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
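The repeated "Cannot find device ..." and "Cannot open network namespace ..." messages at the top of both setup passes are expected rather than failures: before building the topology, the helper first tears down anything a previous run may have left behind and deliberately ignores errors. A minimal sketch of that pattern follows; the '|| true' form is inferred from the bare '# true' lines that follow each failing command in the xtrace.

    # Best-effort cleanup before (re)creating the test network; failures are ignored on purpose.
    ip link set nvmf_init_br nomaster  || true
    ip link set nvmf_tgt_br  nomaster  || true
    ip link set nvmf_tgt_br2 nomaster  || true
    ip link set nvmf_init_br down      || true
    ip link set nvmf_tgt_br  down      || true
    ip link set nvmf_tgt_br2 down      || true
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if        || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true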
00:15:44.790 07:25:05 -- host/identify.sh@19 -- # nvmfpid=80872 00:15:44.790 07:25:05 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:44.790 07:25:05 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:44.790 07:25:05 -- host/identify.sh@23 -- # waitforlisten 80872 00:15:44.790 07:25:05 -- common/autotest_common.sh@829 -- # '[' -z 80872 ']' 00:15:44.790 07:25:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.790 07:25:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.790 07:25:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.790 07:25:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.790 07:25:05 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 [2024-11-28 07:25:05.859038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:44.790 [2024-11-28 07:25:05.859151] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.790 [2024-11-28 07:25:06.005786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.790 [2024-11-28 07:25:06.103075] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.790 [2024-11-28 07:25:06.103531] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.790 [2024-11-28 07:25:06.103656] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.790 [2024-11-28 07:25:06.103768] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
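Because the target was started with -e 0xFFFF, every tracepoint group is enabled, and the two notices above spell out how to get at the resulting trace data. Both commands are taken from the notices themselves; only the copy destination below is a placeholder.

    # Live snapshot of the nvmf app's tracepoints (app started with shm id 0):
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis (destination path is arbitrary):
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0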
00:15:44.790 [2024-11-28 07:25:06.104002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.790 [2024-11-28 07:25:06.104147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.790 [2024-11-28 07:25:06.104942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.790 [2024-11-28 07:25:06.105005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.790 07:25:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.790 07:25:06 -- common/autotest_common.sh@862 -- # return 0 00:15:44.790 07:25:06 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:44.790 07:25:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.790 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 [2024-11-28 07:25:06.893129] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.790 07:25:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.790 07:25:06 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:44.790 07:25:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:44.790 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 07:25:06 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:44.790 07:25:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.790 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 Malloc0 00:15:44.790 07:25:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.790 07:25:06 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.790 07:25:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.790 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 07:25:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.790 07:25:06 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:44.790 07:25:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.790 07:25:06 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 07:25:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.790 07:25:07 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.790 07:25:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.790 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 [2024-11-28 07:25:07.007649] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.790 07:25:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.790 07:25:07 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:44.790 07:25:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.790 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 07:25:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.790 07:25:07 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:44.790 07:25:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.790 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:15:44.790 [2024-11-28 07:25:07.023413] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:44.790 [ 
00:15:44.790 { 00:15:44.790 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:44.790 "subtype": "Discovery", 00:15:44.790 "listen_addresses": [ 00:15:44.790 { 00:15:44.790 "transport": "TCP", 00:15:44.790 "trtype": "TCP", 00:15:44.790 "adrfam": "IPv4", 00:15:44.790 "traddr": "10.0.0.2", 00:15:44.790 "trsvcid": "4420" 00:15:44.790 } 00:15:44.790 ], 00:15:44.790 "allow_any_host": true, 00:15:44.790 "hosts": [] 00:15:44.790 }, 00:15:44.790 { 00:15:44.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.790 "subtype": "NVMe", 00:15:44.790 "listen_addresses": [ 00:15:44.790 { 00:15:44.790 "transport": "TCP", 00:15:44.790 "trtype": "TCP", 00:15:44.790 "adrfam": "IPv4", 00:15:44.790 "traddr": "10.0.0.2", 00:15:44.790 "trsvcid": "4420" 00:15:44.790 } 00:15:44.790 ], 00:15:44.790 "allow_any_host": true, 00:15:44.790 "hosts": [], 00:15:44.790 "serial_number": "SPDK00000000000001", 00:15:44.791 "model_number": "SPDK bdev Controller", 00:15:44.791 "max_namespaces": 32, 00:15:44.791 "min_cntlid": 1, 00:15:44.791 "max_cntlid": 65519, 00:15:44.791 "namespaces": [ 00:15:44.791 { 00:15:44.791 "nsid": 1, 00:15:44.791 "bdev_name": "Malloc0", 00:15:44.791 "name": "Malloc0", 00:15:44.791 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:44.791 "eui64": "ABCDEF0123456789", 00:15:44.791 "uuid": "299d91af-90ba-4197-bdba-b4063ad98d9b" 00:15:44.791 } 00:15:44.791 ] 00:15:44.791 } 00:15:44.791 ] 00:15:44.791 07:25:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.791 07:25:07 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:44.791 [2024-11-28 07:25:07.060740] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
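The JSON listing above (the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with the Malloc0 namespace) was assembled through the harness's rpc_cmd wrapper a few lines earlier. Against the stand-alone scripts/rpc.py client the same sequence would look roughly like the sketch below; method names and arguments are copied from the trace, and substituting rpc.py for rpc_cmd is the only change.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                    # flags exactly as logged
    $RPC bdev_malloc_create 64 512 -b Malloc0                       # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems                                        # prints the JSON shown above

spdk_nvme_identify is then pointed at the discovery service on 10.0.0.2:4420, which is what produces the fabric connect and identify trace that follows.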
00:15:44.791 [2024-11-28 07:25:07.060877] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80911 ] 00:15:45.052 [2024-11-28 07:25:07.201487] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:45.052 [2024-11-28 07:25:07.201571] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:45.052 [2024-11-28 07:25:07.201578] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:45.052 [2024-11-28 07:25:07.201591] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:45.052 [2024-11-28 07:25:07.201605] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:45.052 [2024-11-28 07:25:07.201776] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:45.052 [2024-11-28 07:25:07.201835] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x871510 0 00:15:45.052 [2024-11-28 07:25:07.208359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:45.052 [2024-11-28 07:25:07.208391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:45.052 [2024-11-28 07:25:07.208397] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:45.052 [2024-11-28 07:25:07.208401] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:45.052 [2024-11-28 07:25:07.208461] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.208468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.208473] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.052 [2024-11-28 07:25:07.208489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:45.052 [2024-11-28 07:25:07.208520] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.052 [2024-11-28 07:25:07.216326] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.052 [2024-11-28 07:25:07.216347] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.052 [2024-11-28 07:25:07.216352] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216358] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.052 [2024-11-28 07:25:07.216370] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:45.052 [2024-11-28 07:25:07.216378] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:45.052 [2024-11-28 07:25:07.216385] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:45.052 [2024-11-28 07:25:07.216402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216408] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216413] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.052 [2024-11-28 07:25:07.216423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.052 [2024-11-28 07:25:07.216451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.052 [2024-11-28 07:25:07.216535] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.052 [2024-11-28 07:25:07.216542] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.052 [2024-11-28 07:25:07.216547] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216551] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.052 [2024-11-28 07:25:07.216558] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:45.052 [2024-11-28 07:25:07.216566] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:45.052 [2024-11-28 07:25:07.216575] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216584] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.052 [2024-11-28 07:25:07.216596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.052 [2024-11-28 07:25:07.216616] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.052 [2024-11-28 07:25:07.216670] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.052 [2024-11-28 07:25:07.216677] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.052 [2024-11-28 07:25:07.216681] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216685] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.052 [2024-11-28 07:25:07.216692] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:45.052 [2024-11-28 07:25:07.216701] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:45.052 [2024-11-28 07:25:07.216709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216718] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.052 [2024-11-28 07:25:07.216726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.052 [2024-11-28 07:25:07.216744] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.052 [2024-11-28 07:25:07.216803] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.052 [2024-11-28 07:25:07.216810] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:15:45.052 [2024-11-28 07:25:07.216814] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216818] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.052 [2024-11-28 07:25:07.216825] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:45.052 [2024-11-28 07:25:07.216836] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216840] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216845] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.052 [2024-11-28 07:25:07.216852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.052 [2024-11-28 07:25:07.216870] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.052 [2024-11-28 07:25:07.216923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.052 [2024-11-28 07:25:07.216930] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.052 [2024-11-28 07:25:07.216934] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.052 [2024-11-28 07:25:07.216939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.052 [2024-11-28 07:25:07.216944] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:45.052 [2024-11-28 07:25:07.216950] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:45.052 [2024-11-28 07:25:07.216958] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:45.052 [2024-11-28 07:25:07.217064] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:45.052 [2024-11-28 07:25:07.217070] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:45.052 [2024-11-28 07:25:07.217080] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.053 [2024-11-28 07:25:07.217116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.053 [2024-11-28 07:25:07.217177] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.053 [2024-11-28 07:25:07.217185] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.053 [2024-11-28 07:25:07.217189] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217193] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.053 [2024-11-28 07:25:07.217199] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:45.053 [2024-11-28 07:25:07.217210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217215] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217219] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.053 [2024-11-28 07:25:07.217244] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.053 [2024-11-28 07:25:07.217299] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.053 [2024-11-28 07:25:07.217318] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.053 [2024-11-28 07:25:07.217324] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217329] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.053 [2024-11-28 07:25:07.217334] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:45.053 [2024-11-28 07:25:07.217340] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:45.053 [2024-11-28 07:25:07.217349] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:45.053 [2024-11-28 07:25:07.217367] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:45.053 [2024-11-28 07:25:07.217378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217382] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217387] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.053 [2024-11-28 07:25:07.217418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.053 [2024-11-28 07:25:07.217522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.053 [2024-11-28 07:25:07.217530] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.053 [2024-11-28 07:25:07.217535] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217539] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871510): datao=0, datal=4096, cccid=0 00:15:45.053 [2024-11-28 07:25:07.217544] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bd8a0) on tqpair(0x871510): expected_datao=0, payload_size=4096 00:15:45.053 [2024-11-28 07:25:07.217554] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217559] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.053 [2024-11-28 07:25:07.217575] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.053 [2024-11-28 07:25:07.217579] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217584] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.053 [2024-11-28 07:25:07.217593] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:45.053 [2024-11-28 07:25:07.217599] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:45.053 [2024-11-28 07:25:07.217604] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:45.053 [2024-11-28 07:25:07.217611] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:45.053 [2024-11-28 07:25:07.217617] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:45.053 [2024-11-28 07:25:07.217622] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:45.053 [2024-11-28 07:25:07.217637] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:45.053 [2024-11-28 07:25:07.217646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217650] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:45.053 [2024-11-28 07:25:07.217684] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.053 [2024-11-28 07:25:07.217754] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.053 [2024-11-28 07:25:07.217761] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.053 [2024-11-28 07:25:07.217765] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217770] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bd8a0) on tqpair=0x871510 00:15:45.053 [2024-11-28 07:25:07.217779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217787] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.053 [2024-11-28 07:25:07.217801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217805] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.053 [2024-11-28 07:25:07.217823] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217827] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217831] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.053 [2024-11-28 07:25:07.217844] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217849] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217853] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.053 [2024-11-28 07:25:07.217866] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:45.053 [2024-11-28 07:25:07.217879] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:45.053 [2024-11-28 07:25:07.217887] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217892] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.217896] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.217904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.053 [2024-11-28 07:25:07.217925] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bd8a0, cid 0, qid 0 00:15:45.053 [2024-11-28 07:25:07.217933] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bda00, cid 1, qid 0 00:15:45.053 [2024-11-28 07:25:07.217938] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdb60, cid 2, qid 0 00:15:45.053 [2024-11-28 07:25:07.217944] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.053 [2024-11-28 07:25:07.217949] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bde20, cid 4, qid 0 00:15:45.053 [2024-11-28 07:25:07.218058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.053 [2024-11-28 07:25:07.218066] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.053 [2024-11-28 07:25:07.218070] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.218074] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bde20) on tqpair=0x871510 00:15:45.053 
[2024-11-28 07:25:07.218080] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:45.053 [2024-11-28 07:25:07.218087] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:45.053 [2024-11-28 07:25:07.218098] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.218103] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.218107] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871510) 00:15:45.053 [2024-11-28 07:25:07.218114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.053 [2024-11-28 07:25:07.218133] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bde20, cid 4, qid 0 00:15:45.053 [2024-11-28 07:25:07.218200] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.053 [2024-11-28 07:25:07.218208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.053 [2024-11-28 07:25:07.218212] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.053 [2024-11-28 07:25:07.218216] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871510): datao=0, datal=4096, cccid=4 00:15:45.053 [2024-11-28 07:25:07.218221] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bde20) on tqpair(0x871510): expected_datao=0, payload_size=4096 00:15:45.053 [2024-11-28 07:25:07.218230] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218234] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218243] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.054 [2024-11-28 07:25:07.218249] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.054 [2024-11-28 07:25:07.218254] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218258] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bde20) on tqpair=0x871510 00:15:45.054 [2024-11-28 07:25:07.218272] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:45.054 [2024-11-28 07:25:07.218300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218306] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871510) 00:15:45.054 [2024-11-28 07:25:07.218332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.054 [2024-11-28 07:25:07.218341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218346] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218350] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x871510) 00:15:45.054 [2024-11-28 07:25:07.218357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:15:45.054 [2024-11-28 07:25:07.218384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bde20, cid 4, qid 0 00:15:45.054 [2024-11-28 07:25:07.218392] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdf80, cid 5, qid 0 00:15:45.054 [2024-11-28 07:25:07.218507] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.054 [2024-11-28 07:25:07.218523] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.054 [2024-11-28 07:25:07.218528] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218532] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871510): datao=0, datal=1024, cccid=4 00:15:45.054 [2024-11-28 07:25:07.218537] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bde20) on tqpair(0x871510): expected_datao=0, payload_size=1024 00:15:45.054 [2024-11-28 07:25:07.218546] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218550] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218557] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.054 [2024-11-28 07:25:07.218563] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.054 [2024-11-28 07:25:07.218567] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218571] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdf80) on tqpair=0x871510 00:15:45.054 [2024-11-28 07:25:07.218590] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.054 [2024-11-28 07:25:07.218599] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.054 [2024-11-28 07:25:07.218603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bde20) on tqpair=0x871510 00:15:45.054 [2024-11-28 07:25:07.218619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218624] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218628] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871510) 00:15:45.054 [2024-11-28 07:25:07.218636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.054 [2024-11-28 07:25:07.218660] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bde20, cid 4, qid 0 00:15:45.054 [2024-11-28 07:25:07.218742] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.054 [2024-11-28 07:25:07.218750] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.054 [2024-11-28 07:25:07.218754] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218758] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871510): datao=0, datal=3072, cccid=4 00:15:45.054 [2024-11-28 07:25:07.218763] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bde20) on tqpair(0x871510): expected_datao=0, payload_size=3072 00:15:45.054 [2024-11-28 07:25:07.218771] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 
07:25:07.218776] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.054 [2024-11-28 07:25:07.218791] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.054 [2024-11-28 07:25:07.218796] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218800] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bde20) on tqpair=0x871510 00:15:45.054 [2024-11-28 07:25:07.218810] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218815] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218819] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871510) 00:15:45.054 [2024-11-28 07:25:07.218827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.054 [2024-11-28 07:25:07.218850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bde20, cid 4, qid 0 00:15:45.054 [2024-11-28 07:25:07.218922] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.054 [2024-11-28 07:25:07.218929] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.054 [2024-11-28 07:25:07.218934] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218938] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871510): datao=0, datal=8, cccid=4 00:15:45.054 [2024-11-28 07:25:07.218943] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8bde20) on tqpair(0x871510): expected_datao=0, payload_size=8 00:15:45.054 [2024-11-28 07:25:07.218951] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218955] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.054 [2024-11-28 07:25:07.218970] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.054 [2024-11-28 07:25:07.218977] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.054 [2024-11-28 07:25:07.218981] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.054 ===================================================== 00:15:45.054 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:45.054 ===================================================== 00:15:45.054 Controller Capabilities/Features 00:15:45.054 ================================ 00:15:45.054 Vendor ID: 0000 00:15:45.054 Subsystem Vendor ID: 0000 00:15:45.054 Serial Number: .................... 00:15:45.054 Model Number: ........................................ 
00:15:45.054 Firmware Version: 24.01.1 00:15:45.054 Recommended Arb Burst: 0 00:15:45.054 IEEE OUI Identifier: 00 00 00 00:15:45.054 Multi-path I/O 00:15:45.054 May have multiple subsystem ports: No 00:15:45.054 May have multiple controllers: No 00:15:45.054 Associated with SR-IOV VF: No 00:15:45.054 Max Data Transfer Size: 131072 00:15:45.054 Max Number of Namespaces: 0 00:15:45.054 Max Number of I/O Queues: 1024 00:15:45.054 NVMe Specification Version (VS): 1.3 00:15:45.054 NVMe Specification Version (Identify): 1.3 00:15:45.054 Maximum Queue Entries: 128 00:15:45.054 Contiguous Queues Required: Yes 00:15:45.054 Arbitration Mechanisms Supported 00:15:45.054 Weighted Round Robin: Not Supported 00:15:45.054 Vendor Specific: Not Supported 00:15:45.054 Reset Timeout: 15000 ms 00:15:45.054 Doorbell Stride: 4 bytes 00:15:45.054 NVM Subsystem Reset: Not Supported 00:15:45.054 Command Sets Supported 00:15:45.054 NVM Command Set: Supported 00:15:45.054 Boot Partition: Not Supported 00:15:45.054 Memory Page Size Minimum: 4096 bytes 00:15:45.054 Memory Page Size Maximum: 4096 bytes 00:15:45.054 Persistent Memory Region: Not Supported 00:15:45.054 Optional Asynchronous Events Supported 00:15:45.054 Namespace Attribute Notices: Not Supported 00:15:45.054 Firmware Activation Notices: Not Supported 00:15:45.054 ANA Change Notices: Not Supported 00:15:45.054 PLE Aggregate Log Change Notices: Not Supported 00:15:45.054 LBA Status Info Alert Notices: Not Supported 00:15:45.054 EGE Aggregate Log Change Notices: Not Supported 00:15:45.054 Normal NVM Subsystem Shutdown event: Not Supported 00:15:45.054 Zone Descriptor Change Notices: Not Supported 00:15:45.054 Discovery Log Change Notices: Supported 00:15:45.054 Controller Attributes 00:15:45.054 128-bit Host Identifier: Not Supported 00:15:45.054 Non-Operational Permissive Mode: Not Supported 00:15:45.054 NVM Sets: Not Supported 00:15:45.054 Read Recovery Levels: Not Supported 00:15:45.054 Endurance Groups: Not Supported 00:15:45.054 Predictable Latency Mode: Not Supported 00:15:45.054 Traffic Based Keep ALive: Not Supported 00:15:45.054 Namespace Granularity: Not Supported 00:15:45.054 SQ Associations: Not Supported 00:15:45.054 UUID List: Not Supported 00:15:45.054 Multi-Domain Subsystem: Not Supported 00:15:45.054 Fixed Capacity Management: Not Supported 00:15:45.054 Variable Capacity Management: Not Supported 00:15:45.054 Delete Endurance Group: Not Supported 00:15:45.054 Delete NVM Set: Not Supported 00:15:45.054 Extended LBA Formats Supported: Not Supported 00:15:45.054 Flexible Data Placement Supported: Not Supported 00:15:45.054 00:15:45.054 Controller Memory Buffer Support 00:15:45.054 ================================ 00:15:45.054 Supported: No 00:15:45.054 00:15:45.054 Persistent Memory Region Support 00:15:45.054 ================================ 00:15:45.054 Supported: No 00:15:45.054 00:15:45.054 Admin Command Set Attributes 00:15:45.054 ============================ 00:15:45.054 Security Send/Receive: Not Supported 00:15:45.054 Format NVM: Not Supported 00:15:45.054 Firmware Activate/Download: Not Supported 00:15:45.055 Namespace Management: Not Supported 00:15:45.055 Device Self-Test: Not Supported 00:15:45.055 Directives: Not Supported 00:15:45.055 NVMe-MI: Not Supported 00:15:45.055 Virtualization Management: Not Supported 00:15:45.055 Doorbell Buffer Config: Not Supported 00:15:45.055 Get LBA Status Capability: Not Supported 00:15:45.055 Command & Feature Lockdown Capability: Not Supported 00:15:45.055 Abort Command Limit: 1 00:15:45.055 
Async Event Request Limit: 4 00:15:45.055 Number of Firmware Slots: N/A 00:15:45.055 Firmware Slot 1 Read-Only: N/A 00:15:45.055 [2024-11-28 07:25:07.218986] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bde20) on tqpair=0x871510 00:15:45.055 Firmware Activation Without Reset: N/A 00:15:45.055 Multiple Update Detection Support: N/A 00:15:45.055 Firmware Update Granularity: No Information Provided 00:15:45.055 Per-Namespace SMART Log: No 00:15:45.055 Asymmetric Namespace Access Log Page: Not Supported 00:15:45.055 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:45.055 Command Effects Log Page: Not Supported 00:15:45.055 Get Log Page Extended Data: Supported 00:15:45.055 Telemetry Log Pages: Not Supported 00:15:45.055 Persistent Event Log Pages: Not Supported 00:15:45.055 Supported Log Pages Log Page: May Support 00:15:45.055 Commands Supported & Effects Log Page: Not Supported 00:15:45.055 Feature Identifiers & Effects Log Page:May Support 00:15:45.055 NVMe-MI Commands & Effects Log Page: May Support 00:15:45.055 Data Area 4 for Telemetry Log: Not Supported 00:15:45.055 Error Log Page Entries Supported: 128 00:15:45.055 Keep Alive: Not Supported 00:15:45.055 00:15:45.055 NVM Command Set Attributes 00:15:45.055 ========================== 00:15:45.055 Submission Queue Entry Size 00:15:45.055 Max: 1 00:15:45.055 Min: 1 00:15:45.055 Completion Queue Entry Size 00:15:45.055 Max: 1 00:15:45.055 Min: 1 00:15:45.055 Number of Namespaces: 0 00:15:45.055 Compare Command: Not Supported 00:15:45.055 Write Uncorrectable Command: Not Supported 00:15:45.055 Dataset Management Command: Not Supported 00:15:45.055 Write Zeroes Command: Not Supported 00:15:45.055 Set Features Save Field: Not Supported 00:15:45.055 Reservations: Not Supported 00:15:45.055 Timestamp: Not Supported 00:15:45.055 Copy: Not Supported 00:15:45.055 Volatile Write Cache: Not Present 00:15:45.055 Atomic Write Unit (Normal): 1 00:15:45.055 Atomic Write Unit (PFail): 1 00:15:45.055 Atomic Compare & Write Unit: 1 00:15:45.055 Fused Compare & Write: Supported 00:15:45.055 Scatter-Gather List 00:15:45.055 SGL Command Set: Supported 00:15:45.055 SGL Keyed: Supported 00:15:45.055 SGL Bit Bucket Descriptor: Not Supported 00:15:45.055 SGL Metadata Pointer: Not Supported 00:15:45.055 Oversized SGL: Not Supported 00:15:45.055 SGL Metadata Address: Not Supported 00:15:45.055 SGL Offset: Supported 00:15:45.055 Transport SGL Data Block: Not Supported 00:15:45.055 Replay Protected Memory Block: Not Supported 00:15:45.055 00:15:45.055 Firmware Slot Information 00:15:45.055 ========================= 00:15:45.055 Active slot: 0 00:15:45.055 00:15:45.055 00:15:45.055 Error Log 00:15:45.055 ========= 00:15:45.055 00:15:45.055 Active Namespaces 00:15:45.055 ================= 00:15:45.055 Discovery Log Page 00:15:45.055 ================== 00:15:45.055 Generation Counter: 2 00:15:45.055 Number of Records: 2 00:15:45.055 Record Format: 0 00:15:45.055 00:15:45.055 Discovery Log Entry 0 00:15:45.055 ---------------------- 00:15:45.055 Transport Type: 3 (TCP) 00:15:45.055 Address Family: 1 (IPv4) 00:15:45.055 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:45.055 Entry Flags: 00:15:45.055 Duplicate Returned Information: 1 00:15:45.055 Explicit Persistent Connection Support for Discovery: 1 00:15:45.055 Transport Requirements: 00:15:45.055 Secure Channel: Not Required 00:15:45.055 Port ID: 0 (0x0000) 00:15:45.055 Controller ID: 65535 (0xffff) 00:15:45.055 Admin Max SQ Size: 128 00:15:45.055 Transport Service Identifier: 
4420 00:15:45.055 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:45.055 Transport Address: 10.0.0.2 00:15:45.055 Discovery Log Entry 1 00:15:45.055 ---------------------- 00:15:45.055 Transport Type: 3 (TCP) 00:15:45.055 Address Family: 1 (IPv4) 00:15:45.055 Subsystem Type: 2 (NVM Subsystem) 00:15:45.055 Entry Flags: 00:15:45.055 Duplicate Returned Information: 0 00:15:45.055 Explicit Persistent Connection Support for Discovery: 0 00:15:45.055 Transport Requirements: 00:15:45.055 Secure Channel: Not Required 00:15:45.055 Port ID: 0 (0x0000) 00:15:45.055 Controller ID: 65535 (0xffff) 00:15:45.055 Admin Max SQ Size: 128 00:15:45.055 Transport Service Identifier: 4420 00:15:45.055 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:45.055 Transport Address: 10.0.0.2 [2024-11-28 07:25:07.219158] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:45.055 [2024-11-28 07:25:07.219178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.055 [2024-11-28 07:25:07.219187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.055 [2024-11-28 07:25:07.219193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.055 [2024-11-28 07:25:07.219200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.055 [2024-11-28 07:25:07.219210] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.055 [2024-11-28 07:25:07.219227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.055 [2024-11-28 07:25:07.219252] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.055 [2024-11-28 07:25:07.219341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.055 [2024-11-28 07:25:07.219367] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.055 [2024-11-28 07:25:07.219372] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219376] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.055 [2024-11-28 07:25:07.219385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219390] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219394] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.055 [2024-11-28 07:25:07.219402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.055 [2024-11-28 07:25:07.219427] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.055 [2024-11-28 07:25:07.219512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.055 [2024-11-28 
07:25:07.219525] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.055 [2024-11-28 07:25:07.219530] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219535] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.055 [2024-11-28 07:25:07.219541] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:45.055 [2024-11-28 07:25:07.219547] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:45.055 [2024-11-28 07:25:07.219557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219562] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219567] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.055 [2024-11-28 07:25:07.219575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.055 [2024-11-28 07:25:07.219594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.055 [2024-11-28 07:25:07.219658] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.055 [2024-11-28 07:25:07.219665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.055 [2024-11-28 07:25:07.219669] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219673] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.055 [2024-11-28 07:25:07.219685] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219694] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.055 [2024-11-28 07:25:07.219702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.055 [2024-11-28 07:25:07.219719] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.055 [2024-11-28 07:25:07.219793] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.055 [2024-11-28 07:25:07.219800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.055 [2024-11-28 07:25:07.219804] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.055 [2024-11-28 07:25:07.219808] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.056 [2024-11-28 07:25:07.219818] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.219824] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.219827] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.056 [2024-11-28 07:25:07.219835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.056 [2024-11-28 07:25:07.219852] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.056 
[2024-11-28 07:25:07.219913] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.056 [2024-11-28 07:25:07.219925] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.056 [2024-11-28 07:25:07.219929] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.219934] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.056 [2024-11-28 07:25:07.219945] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.219950] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.219954] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.056 [2024-11-28 07:25:07.219961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.056 [2024-11-28 07:25:07.220006] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.056 [2024-11-28 07:25:07.220058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.056 [2024-11-28 07:25:07.220065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.056 [2024-11-28 07:25:07.220069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.220074] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.056 [2024-11-28 07:25:07.220085] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.220090] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.220094] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.056 [2024-11-28 07:25:07.220102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.056 [2024-11-28 07:25:07.220119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.056 [2024-11-28 07:25:07.220173] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.056 [2024-11-28 07:25:07.220180] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.056 [2024-11-28 07:25:07.220184] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.220189] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.056 [2024-11-28 07:25:07.220199] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.220204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.220208] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.056 [2024-11-28 07:25:07.220216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.056 [2024-11-28 07:25:07.220233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.056 [2024-11-28 07:25:07.220287] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.056 [2024-11-28 07:25:07.220299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:15:45.056 [2024-11-28 07:25:07.220319] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.224360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.056 [2024-11-28 07:25:07.224381] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.224386] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.224391] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871510) 00:15:45.056 [2024-11-28 07:25:07.224400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.056 [2024-11-28 07:25:07.224424] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8bdcc0, cid 3, qid 0 00:15:45.056 [2024-11-28 07:25:07.224492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.056 [2024-11-28 07:25:07.224499] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.056 [2024-11-28 07:25:07.224503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.056 [2024-11-28 07:25:07.224508] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8bdcc0) on tqpair=0x871510 00:15:45.056 [2024-11-28 07:25:07.224517] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:15:45.056 00:15:45.056 07:25:07 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:45.056 [2024-11-28 07:25:07.261397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:45.056 [2024-11-28 07:25:07.261456] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80914 ] 00:15:45.321 [2024-11-28 07:25:07.402516] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:45.321 [2024-11-28 07:25:07.402602] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:45.321 [2024-11-28 07:25:07.402609] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:45.321 [2024-11-28 07:25:07.402625] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:45.321 [2024-11-28 07:25:07.402639] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:45.321 [2024-11-28 07:25:07.402815] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:45.321 [2024-11-28 07:25:07.402875] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b0c510 0 00:15:45.321 [2024-11-28 07:25:07.407401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:45.321 [2024-11-28 07:25:07.407425] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:45.321 [2024-11-28 07:25:07.407431] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:45.321 [2024-11-28 07:25:07.407446] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:45.321 [2024-11-28 07:25:07.407495] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.407503] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.407507] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.321 [2024-11-28 07:25:07.407522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:45.321 [2024-11-28 07:25:07.407555] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.321 [2024-11-28 07:25:07.415331] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.321 [2024-11-28 07:25:07.415352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.321 [2024-11-28 07:25:07.415357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415363] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.321 [2024-11-28 07:25:07.415375] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:45.321 [2024-11-28 07:25:07.415382] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:45.321 [2024-11-28 07:25:07.415389] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:45.321 [2024-11-28 07:25:07.415406] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415415] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.321 [2024-11-28 07:25:07.415425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.321 [2024-11-28 07:25:07.415453] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.321 [2024-11-28 07:25:07.415522] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.321 [2024-11-28 07:25:07.415529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.321 [2024-11-28 07:25:07.415533] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415537] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.321 [2024-11-28 07:25:07.415544] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:45.321 [2024-11-28 07:25:07.415552] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:45.321 [2024-11-28 07:25:07.415560] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415568] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.321 [2024-11-28 07:25:07.415576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.321 [2024-11-28 07:25:07.415595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.321 [2024-11-28 07:25:07.415650] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.321 [2024-11-28 07:25:07.415657] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.321 [2024-11-28 07:25:07.415661] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415665] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.321 [2024-11-28 07:25:07.415672] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:45.321 [2024-11-28 07:25:07.415681] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:45.321 [2024-11-28 07:25:07.415689] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415693] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415697] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.321 [2024-11-28 07:25:07.415704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.321 [2024-11-28 07:25:07.415722] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.321 [2024-11-28 07:25:07.415784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.321 [2024-11-28 07:25:07.415791] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.321 [2024-11-28 
07:25:07.415795] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415799] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.321 [2024-11-28 07:25:07.415806] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:45.321 [2024-11-28 07:25:07.415816] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.321 [2024-11-28 07:25:07.415832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.321 [2024-11-28 07:25:07.415849] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.321 [2024-11-28 07:25:07.415899] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.321 [2024-11-28 07:25:07.415906] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.321 [2024-11-28 07:25:07.415910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.415914] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.321 [2024-11-28 07:25:07.415920] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:45.321 [2024-11-28 07:25:07.415925] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:45.321 [2024-11-28 07:25:07.415933] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:45.321 [2024-11-28 07:25:07.416040] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:45.321 [2024-11-28 07:25:07.416046] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:45.321 [2024-11-28 07:25:07.416056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.416060] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.416064] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.321 [2024-11-28 07:25:07.416072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.321 [2024-11-28 07:25:07.416093] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.321 [2024-11-28 07:25:07.416151] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.321 [2024-11-28 07:25:07.416158] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.321 [2024-11-28 07:25:07.416163] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.416167] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.321 
[2024-11-28 07:25:07.416173] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:45.321 [2024-11-28 07:25:07.416184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.416188] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.416193] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.321 [2024-11-28 07:25:07.416200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.321 [2024-11-28 07:25:07.416218] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.321 [2024-11-28 07:25:07.416269] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.321 [2024-11-28 07:25:07.416276] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.321 [2024-11-28 07:25:07.416280] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.321 [2024-11-28 07:25:07.416285] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.321 [2024-11-28 07:25:07.416291] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:45.321 [2024-11-28 07:25:07.416296] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:45.321 [2024-11-28 07:25:07.416305] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:45.322 [2024-11-28 07:25:07.416323] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.416347] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416352] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416356] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.416364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.322 [2024-11-28 07:25:07.416387] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.322 [2024-11-28 07:25:07.416504] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.322 [2024-11-28 07:25:07.416511] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.322 [2024-11-28 07:25:07.416515] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416520] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c510): datao=0, datal=4096, cccid=0 00:15:45.322 [2024-11-28 07:25:07.416525] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b588a0) on tqpair(0x1b0c510): expected_datao=0, payload_size=4096 00:15:45.322 [2024-11-28 07:25:07.416535] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416540] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416549] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.322 [2024-11-28 07:25:07.416555] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.322 [2024-11-28 07:25:07.416559] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416563] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.322 [2024-11-28 07:25:07.416574] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:45.322 [2024-11-28 07:25:07.416580] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:45.322 [2024-11-28 07:25:07.416585] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:45.322 [2024-11-28 07:25:07.416590] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:45.322 [2024-11-28 07:25:07.416595] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:45.322 [2024-11-28 07:25:07.416601] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.416615] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.416624] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416629] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.416641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:45.322 [2024-11-28 07:25:07.416662] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.322 [2024-11-28 07:25:07.416728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.322 [2024-11-28 07:25:07.416735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.322 [2024-11-28 07:25:07.416739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b588a0) on tqpair=0x1b0c510 00:15:45.322 [2024-11-28 07:25:07.416752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.416767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.322 [2024-11-28 07:25:07.416776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.416789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.322 [2024-11-28 07:25:07.416796] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416799] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416803] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.416809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.322 [2024-11-28 07:25:07.416815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416819] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416822] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.416828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.322 [2024-11-28 07:25:07.416833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.416846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.416854] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.416861] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.416868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.322 [2024-11-28 07:25:07.416889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b588a0, cid 0, qid 0 00:15:45.322 [2024-11-28 07:25:07.416896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58a00, cid 1, qid 0 00:15:45.322 [2024-11-28 07:25:07.416901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58b60, cid 2, qid 0 00:15:45.322 [2024-11-28 07:25:07.416906] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.322 [2024-11-28 07:25:07.416911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58e20, cid 4, qid 0 00:15:45.322 [2024-11-28 07:25:07.417020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.322 [2024-11-28 07:25:07.417027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.322 [2024-11-28 07:25:07.417031] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417035] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58e20) on tqpair=0x1b0c510 00:15:45.322 [2024-11-28 07:25:07.417041] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:45.322 [2024-11-28 07:25:07.417047] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.417056] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.417067] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.417075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417079] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417083] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.417090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:45.322 [2024-11-28 07:25:07.417109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58e20, cid 4, qid 0 00:15:45.322 [2024-11-28 07:25:07.417169] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.322 [2024-11-28 07:25:07.417176] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.322 [2024-11-28 07:25:07.417180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58e20) on tqpair=0x1b0c510 00:15:45.322 [2024-11-28 07:25:07.417245] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.417256] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.417264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417268] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417272] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c510) 00:15:45.322 [2024-11-28 07:25:07.417279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.322 [2024-11-28 07:25:07.417298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58e20, cid 4, qid 0 00:15:45.322 [2024-11-28 07:25:07.417396] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.322 [2024-11-28 07:25:07.417405] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.322 [2024-11-28 07:25:07.417409] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417413] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c510): datao=0, datal=4096, cccid=4 00:15:45.322 [2024-11-28 07:25:07.417418] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b58e20) on tqpair(0x1b0c510): expected_datao=0, payload_size=4096 00:15:45.322 [2024-11-28 07:25:07.417427] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417431] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:15:45.322 [2024-11-28 07:25:07.417440] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.322 [2024-11-28 07:25:07.417446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.322 [2024-11-28 07:25:07.417450] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.322 [2024-11-28 07:25:07.417454] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58e20) on tqpair=0x1b0c510 00:15:45.322 [2024-11-28 07:25:07.417472] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:45.322 [2024-11-28 07:25:07.417484] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:45.322 [2024-11-28 07:25:07.417496] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417503] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417508] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417512] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.417520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.417542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58e20, cid 4, qid 0 00:15:45.323 [2024-11-28 07:25:07.417635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.323 [2024-11-28 07:25:07.417642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.323 [2024-11-28 07:25:07.417646] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417650] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c510): datao=0, datal=4096, cccid=4 00:15:45.323 [2024-11-28 07:25:07.417656] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b58e20) on tqpair(0x1b0c510): expected_datao=0, payload_size=4096 00:15:45.323 [2024-11-28 07:25:07.417664] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417668] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417677] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.323 [2024-11-28 07:25:07.417683] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.323 [2024-11-28 07:25:07.417687] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417691] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58e20) on tqpair=0x1b0c510 00:15:45.323 [2024-11-28 07:25:07.417710] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417721] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417730] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417734] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.417761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.417781] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58e20, cid 4, qid 0 00:15:45.323 [2024-11-28 07:25:07.417854] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.323 [2024-11-28 07:25:07.417861] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.323 [2024-11-28 07:25:07.417865] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417868] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c510): datao=0, datal=4096, cccid=4 00:15:45.323 [2024-11-28 07:25:07.417873] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b58e20) on tqpair(0x1b0c510): expected_datao=0, payload_size=4096 00:15:45.323 [2024-11-28 07:25:07.417881] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417885] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417893] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.323 [2024-11-28 07:25:07.417899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.323 [2024-11-28 07:25:07.417903] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417907] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58e20) on tqpair=0x1b0c510 00:15:45.323 [2024-11-28 07:25:07.417916] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417925] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417936] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417949] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417954] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:45.323 [2024-11-28 07:25:07.417959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:45.323 [2024-11-28 07:25:07.417965] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:45.323 [2024-11-28 07:25:07.417984] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417989] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.417993] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.418001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.418008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418016] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.418023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:45.323 [2024-11-28 07:25:07.418047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58e20, cid 4, qid 0 00:15:45.323 [2024-11-28 07:25:07.418054] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58f80, cid 5, qid 0 00:15:45.323 [2024-11-28 07:25:07.418129] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.323 [2024-11-28 07:25:07.418136] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.323 [2024-11-28 07:25:07.418140] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418144] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58e20) on tqpair=0x1b0c510 00:15:45.323 [2024-11-28 07:25:07.418152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.323 [2024-11-28 07:25:07.418158] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.323 [2024-11-28 07:25:07.418161] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418165] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58f80) on tqpair=0x1b0c510 00:15:45.323 [2024-11-28 07:25:07.418177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418185] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.418192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.418210] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58f80, cid 5, qid 0 00:15:45.323 [2024-11-28 07:25:07.418275] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.323 [2024-11-28 07:25:07.418281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.323 [2024-11-28 07:25:07.418285] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418289] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58f80) on tqpair=0x1b0c510 00:15:45.323 [2024-11-28 07:25:07.418300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418309] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.418317] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.418334] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58f80, cid 5, qid 0 00:15:45.323 [2024-11-28 07:25:07.418447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.323 [2024-11-28 07:25:07.418457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.323 [2024-11-28 07:25:07.418461] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418465] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58f80) on tqpair=0x1b0c510 00:15:45.323 [2024-11-28 07:25:07.418477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418481] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418485] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.418493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.418516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58f80, cid 5, qid 0 00:15:45.323 [2024-11-28 07:25:07.418569] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.323 [2024-11-28 07:25:07.418576] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.323 [2024-11-28 07:25:07.418579] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418583] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58f80) on tqpair=0x1b0c510 00:15:45.323 [2024-11-28 07:25:07.418598] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418603] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418607] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.418614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.418622] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418626] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418630] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.418636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.418644] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418648] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418652] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b0c510) 00:15:45.323 [2024-11-28 07:25:07.418658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:45.323 [2024-11-28 07:25:07.418666] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.323 [2024-11-28 07:25:07.418670] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418674] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b0c510) 00:15:45.324 [2024-11-28 07:25:07.418680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.324 [2024-11-28 07:25:07.418700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58f80, cid 5, qid 0 00:15:45.324 [2024-11-28 07:25:07.418707] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58e20, cid 4, qid 0 00:15:45.324 [2024-11-28 07:25:07.418712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b590e0, cid 6, qid 0 00:15:45.324 [2024-11-28 07:25:07.418716] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b59240, cid 7, qid 0 00:15:45.324 [2024-11-28 07:25:07.418876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.324 [2024-11-28 07:25:07.418883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.324 [2024-11-28 07:25:07.418887] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418890] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c510): datao=0, datal=8192, cccid=5 00:15:45.324 [2024-11-28 07:25:07.418895] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b58f80) on tqpair(0x1b0c510): expected_datao=0, payload_size=8192 00:15:45.324 [2024-11-28 07:25:07.418914] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418919] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418925] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.324 [2024-11-28 07:25:07.418931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.324 [2024-11-28 07:25:07.418935] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418938] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c510): datao=0, datal=512, cccid=4 00:15:45.324 [2024-11-28 07:25:07.418943] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b58e20) on tqpair(0x1b0c510): expected_datao=0, payload_size=512 00:15:45.324 [2024-11-28 07:25:07.418951] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418954] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418960] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.324 [2024-11-28 07:25:07.418966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.324 [2024-11-28 07:25:07.418969] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418973] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c510): datao=0, datal=512, cccid=6 00:15:45.324 [2024-11-28 07:25:07.418977] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b590e0) on tqpair(0x1b0c510): expected_datao=0, payload_size=512 00:15:45.324 [2024-11-28 07:25:07.418985] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418988] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.418994] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:45.324 [2024-11-28 07:25:07.419000] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:45.324 [2024-11-28 07:25:07.419003] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.419007] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c510): datao=0, datal=4096, cccid=7 00:15:45.324 [2024-11-28 07:25:07.419012] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b59240) on tqpair(0x1b0c510): expected_datao=0, payload_size=4096 00:15:45.324 [2024-11-28 07:25:07.419019] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.419023] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.419031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.324 [2024-11-28 07:25:07.419037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.324 [2024-11-28 07:25:07.419041] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.419045] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58f80) on tqpair=0x1b0c510 00:15:45.324 [2024-11-28 07:25:07.419063] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.324 [2024-11-28 07:25:07.419070] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.324 [2024-11-28 07:25:07.419073] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.419077] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58e20) on tqpair=0x1b0c510 00:15:45.324 [2024-11-28 07:25:07.419091] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.324 [2024-11-28 07:25:07.419099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.324 [2024-11-28 07:25:07.419102] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.324 [2024-11-28 07:25:07.419106] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b590e0) on tqpair=0x1b0c510 00:15:45.324 [2024-11-28 07:25:07.419114] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.324 ===================================================== 00:15:45.324 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:45.324 ===================================================== 00:15:45.324 Controller Capabilities/Features 00:15:45.324 ================================ 00:15:45.324 Vendor ID: 8086 00:15:45.324 Subsystem Vendor ID: 8086 00:15:45.324 Serial Number: SPDK00000000000001 00:15:45.324 Model Number: SPDK bdev Controller 00:15:45.324 Firmware Version: 24.01.1 00:15:45.324 Recommended Arb Burst: 6 00:15:45.324 IEEE OUI Identifier: e4 d2 5c 00:15:45.324 Multi-path I/O 00:15:45.324 May have multiple subsystem ports: Yes 00:15:45.324 May have multiple controllers: Yes 00:15:45.324 Associated with SR-IOV VF: No 00:15:45.324 Max Data Transfer Size: 131072 00:15:45.324 Max Number of Namespaces: 32 00:15:45.324 Max Number of I/O Queues: 127 00:15:45.324 NVMe Specification Version (VS): 1.3 00:15:45.324 NVMe Specification Version (Identify): 1.3 00:15:45.324 Maximum 
Queue Entries: 128 00:15:45.324 Contiguous Queues Required: Yes 00:15:45.324 Arbitration Mechanisms Supported 00:15:45.324 Weighted Round Robin: Not Supported 00:15:45.324 Vendor Specific: Not Supported 00:15:45.324 Reset Timeout: 15000 ms 00:15:45.324 Doorbell Stride: 4 bytes 00:15:45.324 NVM Subsystem Reset: Not Supported 00:15:45.324 Command Sets Supported 00:15:45.324 NVM Command Set: Supported 00:15:45.324 Boot Partition: Not Supported 00:15:45.324 Memory Page Size Minimum: 4096 bytes 00:15:45.324 Memory Page Size Maximum: 4096 bytes 00:15:45.324 Persistent Memory Region: Not Supported 00:15:45.324 Optional Asynchronous Events Supported 00:15:45.324 Namespace Attribute Notices: Supported 00:15:45.324 Firmware Activation Notices: Not Supported 00:15:45.324 ANA Change Notices: Not Supported 00:15:45.324 PLE Aggregate Log Change Notices: Not Supported 00:15:45.324 LBA Status Info Alert Notices: Not Supported 00:15:45.324 EGE Aggregate Log Change Notices: Not Supported 00:15:45.324 Normal NVM Subsystem Shutdown event: Not Supported 00:15:45.324 Zone Descriptor Change Notices: Not Supported 00:15:45.324 Discovery Log Change Notices: Not Supported 00:15:45.324 Controller Attributes 00:15:45.324 128-bit Host Identifier: Supported 00:15:45.324 Non-Operational Permissive Mode: Not Supported 00:15:45.324 NVM Sets: Not Supported 00:15:45.324 Read Recovery Levels: Not Supported 00:15:45.324 Endurance Groups: Not Supported 00:15:45.324 Predictable Latency Mode: Not Supported 00:15:45.324 Traffic Based Keep ALive: Not Supported 00:15:45.324 Namespace Granularity: Not Supported 00:15:45.324 SQ Associations: Not Supported 00:15:45.324 UUID List: Not Supported 00:15:45.324 Multi-Domain Subsystem: Not Supported 00:15:45.324 Fixed Capacity Management: Not Supported 00:15:45.324 Variable Capacity Management: Not Supported 00:15:45.324 Delete Endurance Group: Not Supported 00:15:45.324 Delete NVM Set: Not Supported 00:15:45.324 Extended LBA Formats Supported: Not Supported 00:15:45.324 Flexible Data Placement Supported: Not Supported 00:15:45.324 00:15:45.324 Controller Memory Buffer Support 00:15:45.324 ================================ 00:15:45.324 Supported: No 00:15:45.324 00:15:45.324 Persistent Memory Region Support 00:15:45.324 ================================ 00:15:45.324 Supported: No 00:15:45.324 00:15:45.324 Admin Command Set Attributes 00:15:45.324 ============================ 00:15:45.324 Security Send/Receive: Not Supported 00:15:45.324 Format NVM: Not Supported 00:15:45.324 Firmware Activate/Download: Not Supported 00:15:45.324 Namespace Management: Not Supported 00:15:45.324 Device Self-Test: Not Supported 00:15:45.324 Directives: Not Supported 00:15:45.324 NVMe-MI: Not Supported 00:15:45.324 Virtualization Management: Not Supported 00:15:45.324 Doorbell Buffer Config: Not Supported 00:15:45.324 Get LBA Status Capability: Not Supported 00:15:45.324 Command & Feature Lockdown Capability: Not Supported 00:15:45.324 Abort Command Limit: 4 00:15:45.324 Async Event Request Limit: 4 00:15:45.324 Number of Firmware Slots: N/A 00:15:45.324 Firmware Slot 1 Read-Only: N/A 00:15:45.324 Firmware Activation Without Reset: N/A 00:15:45.324 Multiple Update Detection Support: N/A 00:15:45.324 Firmware Update Granularity: No Information Provided 00:15:45.324 Per-Namespace SMART Log: No 00:15:45.324 Asymmetric Namespace Access Log Page: Not Supported 00:15:45.324 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:45.324 Command Effects Log Page: Supported 00:15:45.324 Get Log Page Extended Data: Supported 
00:15:45.324 Telemetry Log Pages: Not Supported 00:15:45.324 Persistent Event Log Pages: Not Supported 00:15:45.324 Supported Log Pages Log Page: May Support 00:15:45.324 Commands Supported & Effects Log Page: Not Supported 00:15:45.324 Feature Identifiers & Effects Log Page:May Support 00:15:45.324 NVMe-MI Commands & Effects Log Page: May Support 00:15:45.324 Data Area 4 for Telemetry Log: Not Supported 00:15:45.324 Error Log Page Entries Supported: 128 00:15:45.325 Keep Alive: Supported 00:15:45.325 Keep Alive Granularity: 10000 ms 00:15:45.325 00:15:45.325 NVM Command Set Attributes 00:15:45.325 ========================== 00:15:45.325 Submission Queue Entry Size 00:15:45.325 Max: 64 00:15:45.325 Min: 64 00:15:45.325 Completion Queue Entry Size 00:15:45.325 Max: 16 00:15:45.325 Min: 16 00:15:45.325 Number of Namespaces: 32 00:15:45.325 Compare Command: Supported 00:15:45.325 Write Uncorrectable Command: Not Supported 00:15:45.325 Dataset Management Command: Supported 00:15:45.325 Write Zeroes Command: Supported 00:15:45.325 Set Features Save Field: Not Supported 00:15:45.325 Reservations: Supported 00:15:45.325 Timestamp: Not Supported 00:15:45.325 Copy: Supported 00:15:45.325 Volatile Write Cache: Present 00:15:45.325 Atomic Write Unit (Normal): 1 00:15:45.325 Atomic Write Unit (PFail): 1 00:15:45.325 Atomic Compare & Write Unit: 1 00:15:45.325 Fused Compare & Write: Supported 00:15:45.325 Scatter-Gather List 00:15:45.325 SGL Command Set: Supported 00:15:45.325 SGL Keyed: Supported 00:15:45.325 SGL Bit Bucket Descriptor: Not Supported 00:15:45.325 SGL Metadata Pointer: Not Supported 00:15:45.325 Oversized SGL: Not Supported 00:15:45.325 SGL Metadata Address: Not Supported 00:15:45.325 SGL Offset: Supported 00:15:45.325 Transport SGL Data Block: Not Supported 00:15:45.325 Replay Protected Memory Block: Not Supported 00:15:45.325 00:15:45.325 Firmware Slot Information 00:15:45.325 ========================= 00:15:45.325 Active slot: 1 00:15:45.325 Slot 1 Firmware Revision: 24.01.1 00:15:45.325 00:15:45.325 00:15:45.325 Commands Supported and Effects 00:15:45.325 ============================== 00:15:45.325 Admin Commands 00:15:45.325 -------------- 00:15:45.325 Get Log Page (02h): Supported 00:15:45.325 Identify (06h): Supported 00:15:45.325 Abort (08h): Supported 00:15:45.325 Set Features (09h): Supported 00:15:45.325 Get Features (0Ah): Supported 00:15:45.325 Asynchronous Event Request (0Ch): Supported 00:15:45.325 Keep Alive (18h): Supported 00:15:45.325 I/O Commands 00:15:45.325 ------------ 00:15:45.325 Flush (00h): Supported LBA-Change 00:15:45.325 Write (01h): Supported LBA-Change 00:15:45.325 Read (02h): Supported 00:15:45.325 Compare (05h): Supported 00:15:45.325 Write Zeroes (08h): Supported LBA-Change 00:15:45.325 Dataset Management (09h): Supported LBA-Change 00:15:45.325 Copy (19h): Supported LBA-Change 00:15:45.325 Unknown (79h): Supported LBA-Change 00:15:45.325 Unknown (7Ah): Supported 00:15:45.325 00:15:45.325 Error Log 00:15:45.325 ========= 00:15:45.325 00:15:45.325 Arbitration 00:15:45.325 =========== 00:15:45.325 Arbitration Burst: 1 00:15:45.325 00:15:45.325 Power Management 00:15:45.325 ================ 00:15:45.325 Number of Power States: 1 00:15:45.325 Current Power State: Power State #0 00:15:45.325 Power State #0: 00:15:45.325 Max Power: 0.00 W 00:15:45.325 Non-Operational State: Operational 00:15:45.325 Entry Latency: Not Reported 00:15:45.325 Exit Latency: Not Reported 00:15:45.325 Relative Read Throughput: 0 00:15:45.325 Relative Read Latency: 0 00:15:45.325 
Relative Write Throughput: 0 00:15:45.325 Relative Write Latency: 0 00:15:45.325 Idle Power: Not Reported 00:15:45.325 Active Power: Not Reported 00:15:45.325 Non-Operational Permissive Mode: Not Supported 00:15:45.325 00:15:45.325 Health Information 00:15:45.325 ================== 00:15:45.325 Critical Warnings: 00:15:45.325 Available Spare Space: OK 00:15:45.325 Temperature: OK 00:15:45.325 Device Reliability: OK 00:15:45.325 Read Only: No 00:15:45.325 Volatile Memory Backup: OK 00:15:45.325 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:45.325 Temperature Threshold: [2024-11-28 07:25:07.419120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.325 [2024-11-28 07:25:07.419125] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.419129] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b59240) on tqpair=0x1b0c510 00:15:45.325 [2024-11-28 07:25:07.419248] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.419255] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.419259] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b0c510) 00:15:45.325 [2024-11-28 07:25:07.419266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.325 [2024-11-28 07:25:07.419288] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b59240, cid 7, qid 0 00:15:45.325 [2024-11-28 07:25:07.422383] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.325 [2024-11-28 07:25:07.422403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.325 [2024-11-28 07:25:07.422409] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422413] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b59240) on tqpair=0x1b0c510 00:15:45.325 [2024-11-28 07:25:07.422454] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:45.325 [2024-11-28 07:25:07.422470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.325 [2024-11-28 07:25:07.422478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.325 [2024-11-28 07:25:07.422485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.325 [2024-11-28 07:25:07.422492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:45.325 [2024-11-28 07:25:07.422502] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422507] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422511] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.325 [2024-11-28 07:25:07.422520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.325 [2024-11-28 07:25:07.422547] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 
00:15:45.325 [2024-11-28 07:25:07.422602] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.325 [2024-11-28 07:25:07.422609] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.325 [2024-11-28 07:25:07.422613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422617] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.325 [2024-11-28 07:25:07.422627] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422631] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422635] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.325 [2024-11-28 07:25:07.422643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.325 [2024-11-28 07:25:07.422672] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.325 [2024-11-28 07:25:07.422779] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.325 [2024-11-28 07:25:07.422786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.325 [2024-11-28 07:25:07.422789] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422793] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.325 [2024-11-28 07:25:07.422800] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:45.325 [2024-11-28 07:25:07.422805] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:45.325 [2024-11-28 07:25:07.422815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.325 [2024-11-28 07:25:07.422824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.422831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.422848] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.422904] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.422911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.422914] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.422918] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.422930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.422935] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.422939] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.422946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 
07:25:07.422963] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.423019] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.423026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.423029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.423044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.423060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.423077] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.423133] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.423140] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.423144] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423148] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.423159] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.423175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.423192] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.423255] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.423262] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.423266] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423270] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.423281] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423285] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423289] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.423296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.423314] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.423380] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.423388] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.423392] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423396] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.423408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423412] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.423424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.423443] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.423498] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.423505] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.423508] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423512] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.423523] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423528] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423532] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.423539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.423556] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.423612] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.423619] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.423623] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.423638] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423642] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.423653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.423671] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.423739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.423746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.423750] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.423765] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.423780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.423797] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.423853] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.423860] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.423864] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423868] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.423879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423883] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.423887] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.423894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.423912] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.424001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.424009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.424013] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.424017] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.424028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.424033] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.424037] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.424045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.424064] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.424117] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.424124] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.424128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.424132] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on 
tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.424144] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.424148] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.424152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.326 [2024-11-28 07:25:07.424160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.326 [2024-11-28 07:25:07.424177] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.326 [2024-11-28 07:25:07.424237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.326 [2024-11-28 07:25:07.424246] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.326 [2024-11-28 07:25:07.424250] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.326 [2024-11-28 07:25:07.424254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.326 [2024-11-28 07:25:07.424266] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424270] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424275] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.424282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.424315] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.424393] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.424408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.424412] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424416] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.424429] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424433] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424437] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.424446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.424466] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.424531] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.424538] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.424541] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424545] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.424557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424561] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424565] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.424572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.424589] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.424647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.424654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.424658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.424673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424677] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424681] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.424688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.424706] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.424762] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.424768] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.424772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424776] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.424787] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424791] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.424802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.424819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.424878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.424885] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.424889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424893] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.424904] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424908] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.424912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 
00:15:45.327 [2024-11-28 07:25:07.424919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.424936] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.425001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.425013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.425018] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.425034] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425039] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425042] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.425050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.425068] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.425120] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.425131] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.425135] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425139] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.425151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425159] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.425167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.425185] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.425239] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.425246] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.425250] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425254] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.425265] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.425280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 
[2024-11-28 07:25:07.425297] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.425363] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.425371] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.425375] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.425390] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425395] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.425406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.425425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.425475] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.425482] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.425485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.425501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425505] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425509] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.425516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.425533] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.425591] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.425598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.425602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425606] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.327 [2024-11-28 07:25:07.425617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425621] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.327 [2024-11-28 07:25:07.425625] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.327 [2024-11-28 07:25:07.425632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.327 [2024-11-28 07:25:07.425649] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.327 [2024-11-28 07:25:07.425710] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.327 [2024-11-28 07:25:07.425717] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.327 [2024-11-28 07:25:07.425721] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.328 [2024-11-28 07:25:07.425736] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425740] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425744] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.328 [2024-11-28 07:25:07.425751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.328 [2024-11-28 07:25:07.425768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.328 [2024-11-28 07:25:07.425825] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.328 [2024-11-28 07:25:07.425832] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.328 [2024-11-28 07:25:07.425835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425839] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.328 [2024-11-28 07:25:07.425850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425859] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.328 [2024-11-28 07:25:07.425866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.328 [2024-11-28 07:25:07.425883] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.328 [2024-11-28 07:25:07.425940] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.328 [2024-11-28 07:25:07.425947] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.328 [2024-11-28 07:25:07.425951] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425955] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.328 [2024-11-28 07:25:07.425966] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425970] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.425974] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.328 [2024-11-28 07:25:07.425981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.328 [2024-11-28 07:25:07.425998] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.328 [2024-11-28 07:25:07.426052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.328 [2024-11-28 07:25:07.426059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.328 
[2024-11-28 07:25:07.426063] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.426076] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.328 [2024-11-28 07:25:07.426087] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.426091] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.426095] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.328 [2024-11-28 07:25:07.426102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.328 [2024-11-28 07:25:07.426119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.328 [2024-11-28 07:25:07.426171] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.328 [2024-11-28 07:25:07.426178] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.328 [2024-11-28 07:25:07.426182] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.426186] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.328 [2024-11-28 07:25:07.426197] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.426201] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.426205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.328 [2024-11-28 07:25:07.426212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.328 [2024-11-28 07:25:07.426229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.328 [2024-11-28 07:25:07.426286] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.328 [2024-11-28 07:25:07.426293] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.328 [2024-11-28 07:25:07.426297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.426301] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.328 [2024-11-28 07:25:07.430328] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.430345] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.430350] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c510) 00:15:45.328 [2024-11-28 07:25:07.430359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:45.328 [2024-11-28 07:25:07.430384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b58cc0, cid 3, qid 0 00:15:45.328 [2024-11-28 07:25:07.430451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:45.328 [2024-11-28 07:25:07.430458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:45.328 [2024-11-28 07:25:07.430462] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:45.328 [2024-11-28 07:25:07.430466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1b58cc0) on tqpair=0x1b0c510 00:15:45.328 [2024-11-28 07:25:07.430475] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:45.328 0 Kelvin (-273 Celsius) 00:15:45.328 Available Spare: 0% 00:15:45.328 Available Spare Threshold: 0% 00:15:45.328 Life Percentage Used: 0% 00:15:45.328 Data Units Read: 0 00:15:45.328 Data Units Written: 0 00:15:45.328 Host Read Commands: 0 00:15:45.328 Host Write Commands: 0 00:15:45.328 Controller Busy Time: 0 minutes 00:15:45.328 Power Cycles: 0 00:15:45.328 Power On Hours: 0 hours 00:15:45.328 Unsafe Shutdowns: 0 00:15:45.328 Unrecoverable Media Errors: 0 00:15:45.328 Lifetime Error Log Entries: 0 00:15:45.328 Warning Temperature Time: 0 minutes 00:15:45.328 Critical Temperature Time: 0 minutes 00:15:45.328 00:15:45.328 Number of Queues 00:15:45.328 ================ 00:15:45.328 Number of I/O Submission Queues: 127 00:15:45.328 Number of I/O Completion Queues: 127 00:15:45.328 00:15:45.328 Active Namespaces 00:15:45.328 ================= 00:15:45.328 Namespace ID:1 00:15:45.328 Error Recovery Timeout: Unlimited 00:15:45.328 Command Set Identifier: NVM (00h) 00:15:45.328 Deallocate: Supported 00:15:45.328 Deallocated/Unwritten Error: Not Supported 00:15:45.328 Deallocated Read Value: Unknown 00:15:45.328 Deallocate in Write Zeroes: Not Supported 00:15:45.328 Deallocated Guard Field: 0xFFFF 00:15:45.328 Flush: Supported 00:15:45.328 Reservation: Supported 00:15:45.328 Namespace Sharing Capabilities: Multiple Controllers 00:15:45.328 Size (in LBAs): 131072 (0GiB) 00:15:45.328 Capacity (in LBAs): 131072 (0GiB) 00:15:45.328 Utilization (in LBAs): 131072 (0GiB) 00:15:45.328 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:45.328 EUI64: ABCDEF0123456789 00:15:45.328 UUID: 299d91af-90ba-4197-bdba-b4063ad98d9b 00:15:45.328 Thin Provisioning: Not Supported 00:15:45.328 Per-NS Atomic Units: Yes 00:15:45.328 Atomic Boundary Size (Normal): 0 00:15:45.328 Atomic Boundary Size (PFail): 0 00:15:45.328 Atomic Boundary Offset: 0 00:15:45.328 Maximum Single Source Range Length: 65535 00:15:45.328 Maximum Copy Length: 65535 00:15:45.328 Maximum Source Range Count: 1 00:15:45.328 NGUID/EUI64 Never Reused: No 00:15:45.328 Namespace Write Protected: No 00:15:45.328 Number of LBA Formats: 1 00:15:45.328 Current LBA Format: LBA Format #00 00:15:45.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:45.328 00:15:45.328 07:25:07 -- host/identify.sh@51 -- # sync 00:15:45.328 07:25:07 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.328 07:25:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.328 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:15:45.328 07:25:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.328 07:25:07 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:45.328 07:25:07 -- host/identify.sh@56 -- # nvmftestfini 00:15:45.328 07:25:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:45.328 07:25:07 -- nvmf/common.sh@116 -- # sync 00:15:45.328 07:25:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:45.328 07:25:07 -- nvmf/common.sh@119 -- # set +e 00:15:45.328 07:25:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:45.328 07:25:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:45.328 rmmod nvme_tcp 00:15:45.328 rmmod nvme_fabrics 00:15:45.328 rmmod nvme_keyring 00:15:45.328 07:25:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:45.328 07:25:07 -- 
nvmf/common.sh@123 -- # set -e 00:15:45.328 07:25:07 -- nvmf/common.sh@124 -- # return 0 00:15:45.328 07:25:07 -- nvmf/common.sh@477 -- # '[' -n 80872 ']' 00:15:45.328 07:25:07 -- nvmf/common.sh@478 -- # killprocess 80872 00:15:45.328 07:25:07 -- common/autotest_common.sh@936 -- # '[' -z 80872 ']' 00:15:45.328 07:25:07 -- common/autotest_common.sh@940 -- # kill -0 80872 00:15:45.328 07:25:07 -- common/autotest_common.sh@941 -- # uname 00:15:45.328 07:25:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:45.328 07:25:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80872 00:15:45.587 07:25:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:45.587 07:25:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:45.587 killing process with pid 80872 00:15:45.587 07:25:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80872' 00:15:45.587 07:25:07 -- common/autotest_common.sh@955 -- # kill 80872 00:15:45.587 [2024-11-28 07:25:07.594740] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:45.587 07:25:07 -- common/autotest_common.sh@960 -- # wait 80872 00:15:45.847 07:25:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:45.847 07:25:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:45.847 07:25:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:45.847 07:25:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:45.847 07:25:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:45.847 07:25:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.847 07:25:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.847 07:25:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.847 07:25:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:45.847 ************************************ 00:15:45.847 END TEST nvmf_identify 00:15:45.847 00:15:45.847 real 0m2.666s 00:15:45.847 user 0m7.323s 00:15:45.847 sys 0m0.694s 00:15:45.847 07:25:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:45.847 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:15:45.847 ************************************ 00:15:45.847 07:25:07 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:45.847 07:25:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:45.847 07:25:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:45.847 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:15:45.847 ************************************ 00:15:45.847 START TEST nvmf_perf 00:15:45.847 ************************************ 00:15:45.847 07:25:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:45.847 * Looking for test storage... 
00:15:45.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:45.847 07:25:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:45.847 07:25:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:45.847 07:25:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:46.107 07:25:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:46.107 07:25:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:46.107 07:25:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:46.107 07:25:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:46.107 07:25:08 -- scripts/common.sh@335 -- # IFS=.-: 00:15:46.107 07:25:08 -- scripts/common.sh@335 -- # read -ra ver1 00:15:46.107 07:25:08 -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.107 07:25:08 -- scripts/common.sh@336 -- # read -ra ver2 00:15:46.107 07:25:08 -- scripts/common.sh@337 -- # local 'op=<' 00:15:46.107 07:25:08 -- scripts/common.sh@339 -- # ver1_l=2 00:15:46.107 07:25:08 -- scripts/common.sh@340 -- # ver2_l=1 00:15:46.107 07:25:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:46.107 07:25:08 -- scripts/common.sh@343 -- # case "$op" in 00:15:46.107 07:25:08 -- scripts/common.sh@344 -- # : 1 00:15:46.107 07:25:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:46.107 07:25:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.107 07:25:08 -- scripts/common.sh@364 -- # decimal 1 00:15:46.107 07:25:08 -- scripts/common.sh@352 -- # local d=1 00:15:46.107 07:25:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.107 07:25:08 -- scripts/common.sh@354 -- # echo 1 00:15:46.107 07:25:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:46.107 07:25:08 -- scripts/common.sh@365 -- # decimal 2 00:15:46.107 07:25:08 -- scripts/common.sh@352 -- # local d=2 00:15:46.107 07:25:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.107 07:25:08 -- scripts/common.sh@354 -- # echo 2 00:15:46.107 07:25:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:46.107 07:25:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:46.107 07:25:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:46.107 07:25:08 -- scripts/common.sh@367 -- # return 0 00:15:46.107 07:25:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.107 07:25:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:46.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.107 --rc genhtml_branch_coverage=1 00:15:46.107 --rc genhtml_function_coverage=1 00:15:46.107 --rc genhtml_legend=1 00:15:46.107 --rc geninfo_all_blocks=1 00:15:46.107 --rc geninfo_unexecuted_blocks=1 00:15:46.107 00:15:46.107 ' 00:15:46.107 07:25:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:46.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.107 --rc genhtml_branch_coverage=1 00:15:46.107 --rc genhtml_function_coverage=1 00:15:46.107 --rc genhtml_legend=1 00:15:46.107 --rc geninfo_all_blocks=1 00:15:46.107 --rc geninfo_unexecuted_blocks=1 00:15:46.108 00:15:46.108 ' 00:15:46.108 07:25:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:46.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.108 --rc genhtml_branch_coverage=1 00:15:46.108 --rc genhtml_function_coverage=1 00:15:46.108 --rc genhtml_legend=1 00:15:46.108 --rc geninfo_all_blocks=1 00:15:46.108 --rc geninfo_unexecuted_blocks=1 00:15:46.108 00:15:46.108 ' 00:15:46.108 
07:25:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:46.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.108 --rc genhtml_branch_coverage=1 00:15:46.108 --rc genhtml_function_coverage=1 00:15:46.108 --rc genhtml_legend=1 00:15:46.108 --rc geninfo_all_blocks=1 00:15:46.108 --rc geninfo_unexecuted_blocks=1 00:15:46.108 00:15:46.108 ' 00:15:46.108 07:25:08 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:46.108 07:25:08 -- nvmf/common.sh@7 -- # uname -s 00:15:46.108 07:25:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.108 07:25:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.108 07:25:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.108 07:25:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.108 07:25:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.108 07:25:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.108 07:25:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.108 07:25:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.108 07:25:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.108 07:25:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.108 07:25:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:15:46.108 07:25:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:15:46.108 07:25:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.108 07:25:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.108 07:25:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:46.108 07:25:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:46.108 07:25:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.108 07:25:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.108 07:25:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.108 07:25:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.108 07:25:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.108 07:25:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.108 07:25:08 -- paths/export.sh@5 -- # export PATH 00:15:46.108 07:25:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.108 07:25:08 -- nvmf/common.sh@46 -- # : 0 00:15:46.108 07:25:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:46.108 07:25:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:46.108 07:25:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:46.108 07:25:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.108 07:25:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.108 07:25:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:46.108 07:25:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:46.108 07:25:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:46.108 07:25:08 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:46.108 07:25:08 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:46.108 07:25:08 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:46.108 07:25:08 -- host/perf.sh@17 -- # nvmftestinit 00:15:46.108 07:25:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:46.108 07:25:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.108 07:25:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:46.108 07:25:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:46.108 07:25:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:46.108 07:25:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.108 07:25:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.108 07:25:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.108 07:25:08 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:46.108 07:25:08 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:46.108 07:25:08 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:46.108 07:25:08 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:46.108 07:25:08 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:46.108 07:25:08 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:46.108 07:25:08 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.108 07:25:08 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.108 07:25:08 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:46.108 07:25:08 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:46.108 07:25:08 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:46.108 07:25:08 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:46.108 07:25:08 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:46.108 07:25:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.108 07:25:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:46.108 07:25:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:46.108 07:25:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:46.108 07:25:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:46.108 07:25:08 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:46.108 07:25:08 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:46.108 Cannot find device "nvmf_tgt_br" 00:15:46.108 07:25:08 -- nvmf/common.sh@154 -- # true 00:15:46.108 07:25:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:46.108 Cannot find device "nvmf_tgt_br2" 00:15:46.108 07:25:08 -- nvmf/common.sh@155 -- # true 00:15:46.108 07:25:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:46.108 07:25:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:46.108 Cannot find device "nvmf_tgt_br" 00:15:46.108 07:25:08 -- nvmf/common.sh@157 -- # true 00:15:46.108 07:25:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:46.108 Cannot find device "nvmf_tgt_br2" 00:15:46.108 07:25:08 -- nvmf/common.sh@158 -- # true 00:15:46.109 07:25:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:46.109 07:25:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:46.109 07:25:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:46.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.109 07:25:08 -- nvmf/common.sh@161 -- # true 00:15:46.109 07:25:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:46.109 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:46.109 07:25:08 -- nvmf/common.sh@162 -- # true 00:15:46.109 07:25:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:46.109 07:25:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:46.109 07:25:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:46.109 07:25:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:46.368 07:25:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:46.368 07:25:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:46.368 07:25:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:46.368 07:25:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:46.368 07:25:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:46.368 07:25:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:46.368 07:25:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:46.368 07:25:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:46.368 07:25:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:46.368 07:25:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:46.368 07:25:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:15:46.368 07:25:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:46.368 07:25:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:46.368 07:25:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:46.368 07:25:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:46.368 07:25:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:46.368 07:25:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:46.368 07:25:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:46.368 07:25:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:46.368 07:25:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:46.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:46.368 00:15:46.368 --- 10.0.0.2 ping statistics --- 00:15:46.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.368 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:46.368 07:25:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:46.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:46.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:46.368 00:15:46.368 --- 10.0.0.3 ping statistics --- 00:15:46.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.368 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:46.368 07:25:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:46.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:15:46.368 00:15:46.368 --- 10.0.0.1 ping statistics --- 00:15:46.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.368 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:46.368 07:25:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.368 07:25:08 -- nvmf/common.sh@421 -- # return 0 00:15:46.368 07:25:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:46.368 07:25:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.368 07:25:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:46.368 07:25:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:46.368 07:25:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.368 07:25:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:46.368 07:25:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:46.368 07:25:08 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:46.368 07:25:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:46.368 07:25:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.368 07:25:08 -- common/autotest_common.sh@10 -- # set +x 00:15:46.368 07:25:08 -- nvmf/common.sh@469 -- # nvmfpid=81090 00:15:46.368 07:25:08 -- nvmf/common.sh@470 -- # waitforlisten 81090 00:15:46.368 07:25:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:46.368 07:25:08 -- common/autotest_common.sh@829 -- # '[' -z 81090 ']' 00:15:46.368 07:25:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.368 07:25:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
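For readability: the nvmf_veth_init trace above reduces to a small veth/bridge topology with the SPDK target running inside its own network namespace. A condensed, illustrative sketch follows (interface names, namespace name, and addresses are taken from the trace; the pre-cleanup attempts and retries that common.sh performs, visible as the "Cannot find device" lines above, are omitted, and root privileges are assumed):

    # target-side veth pairs move into nvmf_tgt_ns_spdk; the initiator side stays in the root namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # 10.0.0.1 is the initiator; 10.0.0.2 and 10.0.0.3 become the target's listen addresses in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the root-namespace peer ends together and open TCP/4420 toward the initiator interface
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # connectivity sanity checks, as performed above
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The separate namespace keeps the target's TCP listeners (10.0.0.2/10.0.0.3) isolated from the host stack while still reachable from the root-namespace initiator through the nvmf_br bridge, which is why nvmf_tgt is launched via "ip netns exec nvmf_tgt_ns_spdk" in the trace above.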
00:15:46.368 07:25:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.368 07:25:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.368 07:25:08 -- common/autotest_common.sh@10 -- # set +x 00:15:46.368 [2024-11-28 07:25:08.629944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:46.368 [2024-11-28 07:25:08.630050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.628 [2024-11-28 07:25:08.773650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.628 [2024-11-28 07:25:08.871426] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:46.628 [2024-11-28 07:25:08.871592] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.628 [2024-11-28 07:25:08.871609] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.628 [2024-11-28 07:25:08.871620] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.628 [2024-11-28 07:25:08.871840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.628 [2024-11-28 07:25:08.872106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.628 [2024-11-28 07:25:08.872749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.628 [2024-11-28 07:25:08.872801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.566 07:25:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.566 07:25:09 -- common/autotest_common.sh@862 -- # return 0 00:15:47.566 07:25:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.566 07:25:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.566 07:25:09 -- common/autotest_common.sh@10 -- # set +x 00:15:47.566 07:25:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.566 07:25:09 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:47.566 07:25:09 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:48.134 07:25:10 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:48.134 07:25:10 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:48.393 07:25:10 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:15:48.393 07:25:10 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:48.652 07:25:10 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:48.652 07:25:10 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:15:48.652 07:25:10 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:48.652 07:25:10 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:48.652 07:25:10 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:48.652 [2024-11-28 07:25:10.911831] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.911 07:25:10 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:49.171 07:25:11 -- 
host/perf.sh@45 -- # for bdev in $bdevs 00:15:49.171 07:25:11 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:49.436 07:25:11 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:49.436 07:25:11 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:49.436 07:25:11 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.747 [2024-11-28 07:25:11.930556] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.747 07:25:11 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:50.006 07:25:12 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:15:50.006 07:25:12 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:50.006 07:25:12 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:50.006 07:25:12 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:51.387 Initializing NVMe Controllers 00:15:51.387 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:15:51.387 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:15:51.387 Initialization complete. Launching workers. 00:15:51.387 ======================================================== 00:15:51.387 Latency(us) 00:15:51.387 Device Information : IOPS MiB/s Average min max 00:15:51.387 PCIE (0000:00:06.0) NSID 1 from core 0: 23423.23 91.50 1366.41 297.71 5135.45 00:15:51.387 ======================================================== 00:15:51.387 Total : 23423.23 91.50 1366.41 297.71 5135.45 00:15:51.387 00:15:51.387 07:25:13 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:52.765 Initializing NVMe Controllers 00:15:52.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:52.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:52.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:52.765 Initialization complete. Launching workers. 
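For reference, the target configuration exercised by the perf runs above and below is assembled by host/perf.sh through rpc.py. A rough, condensed equivalent of the traced commands (subsystem NQN, serial, bdev names, address and port exactly as in the log; run from the SPDK repository root) would look like this; the gen_nvme.sh plumbing is sketched as a pipe and may differ in detail from what perf.sh actually does:

    # start the target inside the test namespace and wait for /var/tmp/spdk.sock before issuing RPCs
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    ./scripts/rpc.py bdev_malloc_create 64 512                        # creates Malloc0
    ./scripts/gen_nvme.sh | ./scripts/rpc.py load_subsystem_config    # attaches the local PCIe NVMe (Nvme0n1)

    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # initiator side: the perf sweeps then connect over TCP from the root namespace, e.g.
    ./build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The RPC socket (/var/tmp/spdk.sock) is a Unix-domain socket on the shared filesystem, so rpc.py can be run from the root namespace even though nvmf_tgt itself lives in nvmf_tgt_ns_spdk.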
00:15:52.765 ======================================================== 00:15:52.765 Latency(us) 00:15:52.765 Device Information : IOPS MiB/s Average min max 00:15:52.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3265.98 12.76 305.87 112.35 7295.15 00:15:52.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8186.22 5050.42 12040.93 00:15:52.765 ======================================================== 00:15:52.765 Total : 3388.98 13.24 591.88 112.35 12040.93 00:15:52.765 00:15:52.765 07:25:14 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:54.149 Initializing NVMe Controllers 00:15:54.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:54.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:54.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:54.150 Initialization complete. Launching workers. 00:15:54.150 ======================================================== 00:15:54.150 Latency(us) 00:15:54.150 Device Information : IOPS MiB/s Average min max 00:15:54.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8491.53 33.17 3769.55 473.37 8374.07 00:15:54.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3974.31 15.52 8104.79 6946.35 16449.62 00:15:54.150 ======================================================== 00:15:54.150 Total : 12465.84 48.69 5151.70 473.37 16449.62 00:15:54.150 00:15:54.150 07:25:16 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:54.150 07:25:16 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:56.682 Initializing NVMe Controllers 00:15:56.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:56.682 Controller IO queue size 128, less than required. 00:15:56.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:56.682 Controller IO queue size 128, less than required. 00:15:56.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:56.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:56.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:56.682 Initialization complete. Launching workers. 
00:15:56.682 ======================================================== 00:15:56.682 Latency(us) 00:15:56.682 Device Information : IOPS MiB/s Average min max 00:15:56.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1492.47 373.12 86440.27 42819.54 148904.50 00:15:56.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 632.49 158.12 206735.67 116850.60 327091.91 00:15:56.682 ======================================================== 00:15:56.682 Total : 2124.96 531.24 122245.84 42819.54 327091.91 00:15:56.682 00:15:56.682 07:25:18 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:56.682 No valid NVMe controllers or AIO or URING devices found 00:15:56.682 Initializing NVMe Controllers 00:15:56.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:56.682 Controller IO queue size 128, less than required. 00:15:56.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:56.682 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:56.682 Controller IO queue size 128, less than required. 00:15:56.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:56.682 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:56.682 WARNING: Some requested NVMe devices were skipped 00:15:56.682 07:25:18 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:59.218 Initializing NVMe Controllers 00:15:59.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:59.218 Controller IO queue size 128, less than required. 00:15:59.219 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:59.219 Controller IO queue size 128, less than required. 00:15:59.219 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:59.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:59.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:59.219 Initialization complete. Launching workers. 
00:15:59.219 00:15:59.219 ==================== 00:15:59.219 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:59.219 TCP transport: 00:15:59.219 polls: 7501 00:15:59.219 idle_polls: 0 00:15:59.219 sock_completions: 7501 00:15:59.219 nvme_completions: 5715 00:15:59.219 submitted_requests: 8687 00:15:59.219 queued_requests: 1 00:15:59.219 00:15:59.219 ==================== 00:15:59.219 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:59.219 TCP transport: 00:15:59.219 polls: 7565 00:15:59.219 idle_polls: 0 00:15:59.219 sock_completions: 7565 00:15:59.219 nvme_completions: 6049 00:15:59.219 submitted_requests: 9235 00:15:59.219 queued_requests: 1 00:15:59.219 ======================================================== 00:15:59.219 Latency(us) 00:15:59.219 Device Information : IOPS MiB/s Average min max 00:15:59.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1492.21 373.05 88302.20 69409.66 145308.52 00:15:59.219 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1575.20 393.80 81266.24 37106.97 147435.27 00:15:59.219 ======================================================== 00:15:59.219 Total : 3067.41 766.85 84689.05 37106.97 147435.27 00:15:59.219 00:15:59.219 07:25:21 -- host/perf.sh@66 -- # sync 00:15:59.219 07:25:21 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:59.478 07:25:21 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:15:59.478 07:25:21 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:15:59.478 07:25:21 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:16:00.047 07:25:22 -- host/perf.sh@72 -- # ls_guid=3c78723d-05fd-40ce-8d0d-d2abba801437 00:16:00.047 07:25:22 -- host/perf.sh@73 -- # get_lvs_free_mb 3c78723d-05fd-40ce-8d0d-d2abba801437 00:16:00.047 07:25:22 -- common/autotest_common.sh@1353 -- # local lvs_uuid=3c78723d-05fd-40ce-8d0d-d2abba801437 00:16:00.047 07:25:22 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:00.047 07:25:22 -- common/autotest_common.sh@1355 -- # local fc 00:16:00.047 07:25:22 -- common/autotest_common.sh@1356 -- # local cs 00:16:00.047 07:25:22 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:00.047 07:25:22 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:00.047 { 00:16:00.047 "uuid": "3c78723d-05fd-40ce-8d0d-d2abba801437", 00:16:00.047 "name": "lvs_0", 00:16:00.047 "base_bdev": "Nvme0n1", 00:16:00.047 "total_data_clusters": 1278, 00:16:00.047 "free_clusters": 1278, 00:16:00.047 "block_size": 4096, 00:16:00.047 "cluster_size": 4194304 00:16:00.047 } 00:16:00.047 ]' 00:16:00.047 07:25:22 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="3c78723d-05fd-40ce-8d0d-d2abba801437") .free_clusters' 00:16:00.306 07:25:22 -- common/autotest_common.sh@1358 -- # fc=1278 00:16:00.306 07:25:22 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="3c78723d-05fd-40ce-8d0d-d2abba801437") .cluster_size' 00:16:00.306 5112 00:16:00.306 07:25:22 -- common/autotest_common.sh@1359 -- # cs=4194304 00:16:00.306 07:25:22 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:16:00.306 07:25:22 -- common/autotest_common.sh@1363 -- # echo 5112 00:16:00.306 07:25:22 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:16:00.306 07:25:22 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
3c78723d-05fd-40ce-8d0d-d2abba801437 lbd_0 5112 00:16:00.565 07:25:22 -- host/perf.sh@80 -- # lb_guid=e732b45b-448b-4b4e-82e2-6c5d6b22217a 00:16:00.565 07:25:22 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore e732b45b-448b-4b4e-82e2-6c5d6b22217a lvs_n_0 00:16:00.825 07:25:23 -- host/perf.sh@83 -- # ls_nested_guid=900e0c6b-a547-44de-b832-f8390e33a0de 00:16:00.825 07:25:23 -- host/perf.sh@84 -- # get_lvs_free_mb 900e0c6b-a547-44de-b832-f8390e33a0de 00:16:00.825 07:25:23 -- common/autotest_common.sh@1353 -- # local lvs_uuid=900e0c6b-a547-44de-b832-f8390e33a0de 00:16:00.825 07:25:23 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:00.825 07:25:23 -- common/autotest_common.sh@1355 -- # local fc 00:16:00.825 07:25:23 -- common/autotest_common.sh@1356 -- # local cs 00:16:00.825 07:25:23 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:01.089 07:25:23 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:01.089 { 00:16:01.089 "uuid": "3c78723d-05fd-40ce-8d0d-d2abba801437", 00:16:01.089 "name": "lvs_0", 00:16:01.089 "base_bdev": "Nvme0n1", 00:16:01.089 "total_data_clusters": 1278, 00:16:01.089 "free_clusters": 0, 00:16:01.089 "block_size": 4096, 00:16:01.089 "cluster_size": 4194304 00:16:01.089 }, 00:16:01.089 { 00:16:01.089 "uuid": "900e0c6b-a547-44de-b832-f8390e33a0de", 00:16:01.089 "name": "lvs_n_0", 00:16:01.089 "base_bdev": "e732b45b-448b-4b4e-82e2-6c5d6b22217a", 00:16:01.089 "total_data_clusters": 1276, 00:16:01.089 "free_clusters": 1276, 00:16:01.089 "block_size": 4096, 00:16:01.089 "cluster_size": 4194304 00:16:01.089 } 00:16:01.089 ]' 00:16:01.089 07:25:23 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="900e0c6b-a547-44de-b832-f8390e33a0de") .free_clusters' 00:16:01.089 07:25:23 -- common/autotest_common.sh@1358 -- # fc=1276 00:16:01.089 07:25:23 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="900e0c6b-a547-44de-b832-f8390e33a0de") .cluster_size' 00:16:01.089 5104 00:16:01.089 07:25:23 -- common/autotest_common.sh@1359 -- # cs=4194304 00:16:01.089 07:25:23 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:16:01.089 07:25:23 -- common/autotest_common.sh@1363 -- # echo 5104 00:16:01.089 07:25:23 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:16:01.089 07:25:23 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 900e0c6b-a547-44de-b832-f8390e33a0de lbd_nest_0 5104 00:16:01.658 07:25:23 -- host/perf.sh@88 -- # lb_nested_guid=5e10824e-2892-48b1-864d-dc6f176ad59e 00:16:01.658 07:25:23 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:01.658 07:25:23 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:16:01.658 07:25:23 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 5e10824e-2892-48b1-864d-dc6f176ad59e 00:16:01.917 07:25:24 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.176 07:25:24 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:16:02.176 07:25:24 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:16:02.176 07:25:24 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:16:02.176 07:25:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:02.176 07:25:24 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:02.745 No valid NVMe controllers or AIO or URING devices found 00:16:02.745 Initializing NVMe Controllers 00:16:02.745 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.745 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:16:02.745 WARNING: Some requested NVMe devices were skipped 00:16:02.745 07:25:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:02.745 07:25:24 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:12.838 Initializing NVMe Controllers 00:16:12.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:12.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:12.838 Initialization complete. Launching workers. 00:16:12.838 ======================================================== 00:16:12.838 Latency(us) 00:16:12.838 Device Information : IOPS MiB/s Average min max 00:16:12.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 863.54 107.94 1157.72 378.21 9270.10 00:16:12.838 ======================================================== 00:16:12.838 Total : 863.54 107.94 1157.72 378.21 9270.10 00:16:12.838 00:16:12.838 07:25:35 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:16:12.838 07:25:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:12.838 07:25:35 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:13.096 No valid NVMe controllers or AIO or URING devices found 00:16:13.096 Initializing NVMe Controllers 00:16:13.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:13.096 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:16:13.096 WARNING: Some requested NVMe devices were skipped 00:16:13.096 07:25:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:13.096 07:25:35 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:25.307 Initializing NVMe Controllers 00:16:25.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:25.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:25.307 Initialization complete. Launching workers. 
00:16:25.307 ======================================================== 00:16:25.307 Latency(us) 00:16:25.307 Device Information : IOPS MiB/s Average min max 00:16:25.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1341.18 167.65 23888.07 5787.74 63533.90 00:16:25.307 ======================================================== 00:16:25.307 Total : 1341.18 167.65 23888.07 5787.74 63533.90 00:16:25.307 00:16:25.307 07:25:45 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:16:25.307 07:25:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:25.307 07:25:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:25.307 No valid NVMe controllers or AIO or URING devices found 00:16:25.307 Initializing NVMe Controllers 00:16:25.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:25.307 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:16:25.307 WARNING: Some requested NVMe devices were skipped 00:16:25.308 07:25:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:25.308 07:25:45 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:35.289 Initializing NVMe Controllers 00:16:35.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:35.289 Controller IO queue size 128, less than required. 00:16:35.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:35.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:35.289 Initialization complete. Launching workers. 
00:16:35.289 ======================================================== 00:16:35.289 Latency(us) 00:16:35.289 Device Information : IOPS MiB/s Average min max 00:16:35.289 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3602.73 450.34 35604.94 7250.89 77078.07 00:16:35.289 ======================================================== 00:16:35.289 Total : 3602.73 450.34 35604.94 7250.89 77078.07 00:16:35.289 00:16:35.289 07:25:56 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.289 07:25:56 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5e10824e-2892-48b1-864d-dc6f176ad59e 00:16:35.289 07:25:56 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:35.289 07:25:57 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e732b45b-448b-4b4e-82e2-6c5d6b22217a 00:16:35.548 07:25:57 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:35.807 07:25:57 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:35.807 07:25:57 -- host/perf.sh@114 -- # nvmftestfini 00:16:35.807 07:25:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:35.807 07:25:57 -- nvmf/common.sh@116 -- # sync 00:16:35.807 07:25:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:35.807 07:25:57 -- nvmf/common.sh@119 -- # set +e 00:16:35.807 07:25:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:35.807 07:25:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:35.807 rmmod nvme_tcp 00:16:35.807 rmmod nvme_fabrics 00:16:35.807 rmmod nvme_keyring 00:16:35.807 07:25:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:35.807 07:25:57 -- nvmf/common.sh@123 -- # set -e 00:16:35.807 07:25:57 -- nvmf/common.sh@124 -- # return 0 00:16:35.807 07:25:57 -- nvmf/common.sh@477 -- # '[' -n 81090 ']' 00:16:35.807 07:25:57 -- nvmf/common.sh@478 -- # killprocess 81090 00:16:35.807 07:25:57 -- common/autotest_common.sh@936 -- # '[' -z 81090 ']' 00:16:35.807 07:25:57 -- common/autotest_common.sh@940 -- # kill -0 81090 00:16:35.807 07:25:57 -- common/autotest_common.sh@941 -- # uname 00:16:35.807 07:25:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.807 07:25:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81090 00:16:35.807 killing process with pid 81090 00:16:35.807 07:25:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:35.807 07:25:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:35.807 07:25:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81090' 00:16:35.807 07:25:58 -- common/autotest_common.sh@955 -- # kill 81090 00:16:35.807 07:25:58 -- common/autotest_common.sh@960 -- # wait 81090 00:16:36.745 07:25:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:36.745 07:25:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:36.745 07:25:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:36.745 07:25:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.745 07:25:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:36.745 07:25:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.745 07:25:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.745 07:25:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.745 07:25:58 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:16:36.745 00:16:36.745 real 0m50.750s 00:16:36.745 user 3m10.381s 00:16:36.745 sys 0m13.532s 00:16:36.745 07:25:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:36.745 07:25:58 -- common/autotest_common.sh@10 -- # set +x 00:16:36.745 ************************************ 00:16:36.745 END TEST nvmf_perf 00:16:36.745 ************************************ 00:16:36.745 07:25:58 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:36.745 07:25:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:36.745 07:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:36.745 07:25:58 -- common/autotest_common.sh@10 -- # set +x 00:16:36.745 ************************************ 00:16:36.745 START TEST nvmf_fio_host 00:16:36.745 ************************************ 00:16:36.745 07:25:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:36.745 * Looking for test storage... 00:16:36.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.745 07:25:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:36.745 07:25:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:36.745 07:25:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:36.745 07:25:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:36.745 07:25:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:36.745 07:25:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:36.745 07:25:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:36.745 07:25:58 -- scripts/common.sh@335 -- # IFS=.-: 00:16:36.745 07:25:58 -- scripts/common.sh@335 -- # read -ra ver1 00:16:36.745 07:25:58 -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.745 07:25:58 -- scripts/common.sh@336 -- # read -ra ver2 00:16:36.745 07:25:58 -- scripts/common.sh@337 -- # local 'op=<' 00:16:36.745 07:25:58 -- scripts/common.sh@339 -- # ver1_l=2 00:16:36.745 07:25:58 -- scripts/common.sh@340 -- # ver2_l=1 00:16:36.745 07:25:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:36.745 07:25:58 -- scripts/common.sh@343 -- # case "$op" in 00:16:36.745 07:25:58 -- scripts/common.sh@344 -- # : 1 00:16:36.745 07:25:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:36.745 07:25:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:36.745 07:25:58 -- scripts/common.sh@364 -- # decimal 1 00:16:36.745 07:25:58 -- scripts/common.sh@352 -- # local d=1 00:16:36.745 07:25:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.745 07:25:58 -- scripts/common.sh@354 -- # echo 1 00:16:36.745 07:25:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:36.745 07:25:58 -- scripts/common.sh@365 -- # decimal 2 00:16:36.745 07:25:58 -- scripts/common.sh@352 -- # local d=2 00:16:36.745 07:25:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.745 07:25:58 -- scripts/common.sh@354 -- # echo 2 00:16:36.745 07:25:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:36.745 07:25:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:36.745 07:25:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:36.745 07:25:58 -- scripts/common.sh@367 -- # return 0 00:16:36.745 07:25:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.745 07:25:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:36.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.745 --rc genhtml_branch_coverage=1 00:16:36.745 --rc genhtml_function_coverage=1 00:16:36.745 --rc genhtml_legend=1 00:16:36.745 --rc geninfo_all_blocks=1 00:16:36.745 --rc geninfo_unexecuted_blocks=1 00:16:36.745 00:16:36.745 ' 00:16:36.745 07:25:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:36.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.745 --rc genhtml_branch_coverage=1 00:16:36.745 --rc genhtml_function_coverage=1 00:16:36.745 --rc genhtml_legend=1 00:16:36.745 --rc geninfo_all_blocks=1 00:16:36.745 --rc geninfo_unexecuted_blocks=1 00:16:36.745 00:16:36.745 ' 00:16:36.745 07:25:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:36.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.745 --rc genhtml_branch_coverage=1 00:16:36.745 --rc genhtml_function_coverage=1 00:16:36.745 --rc genhtml_legend=1 00:16:36.745 --rc geninfo_all_blocks=1 00:16:36.745 --rc geninfo_unexecuted_blocks=1 00:16:36.745 00:16:36.745 ' 00:16:36.745 07:25:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:36.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.745 --rc genhtml_branch_coverage=1 00:16:36.745 --rc genhtml_function_coverage=1 00:16:36.745 --rc genhtml_legend=1 00:16:36.745 --rc geninfo_all_blocks=1 00:16:36.745 --rc geninfo_unexecuted_blocks=1 00:16:36.745 00:16:36.745 ' 00:16:36.745 07:25:58 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.745 07:25:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.745 07:25:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.745 07:25:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.745 07:25:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.745 07:25:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.746 07:25:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.746 07:25:58 -- paths/export.sh@5 -- # export PATH 00:16:36.746 07:25:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.746 07:25:58 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.746 07:25:58 -- nvmf/common.sh@7 -- # uname -s 00:16:36.746 07:25:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.746 07:25:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.746 07:25:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.746 07:25:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.746 07:25:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.746 07:25:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.746 07:25:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.746 07:25:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.746 07:25:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.746 07:25:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.746 07:25:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:16:36.746 07:25:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:16:36.746 07:25:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.746 07:25:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.746 07:25:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.746 07:25:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.746 07:25:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.746 07:25:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.746 07:25:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.746 07:25:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.746 07:25:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.746 07:25:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.746 07:25:58 -- paths/export.sh@5 -- # export PATH 00:16:36.746 07:25:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.746 07:25:58 -- nvmf/common.sh@46 -- # : 0 00:16:36.746 07:25:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:36.746 07:25:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:36.746 07:25:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:36.746 07:25:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.746 07:25:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.746 07:25:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:36.746 07:25:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:36.746 07:25:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:36.746 07:25:58 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.746 07:25:58 -- host/fio.sh@14 -- # nvmftestinit 00:16:36.746 07:25:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:36.746 07:25:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.746 07:25:58 -- nvmf/common.sh@436 -- # prepare_net_devs 
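The nvmftestinit call traced below (nvmf_veth_init) stands up a self-contained NVMe/TCP test network: an nvmf_tgt_ns_spdk namespace for the target, veth pairs for the initiator- and target-side interfaces, an nvmf_br bridge joining the host-side peers, 10.0.0.1/24 on the initiator interface, and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace. Condensed from the commands in that trace (a sketch only, run as root, showing just the first target interface):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to port 4420

Pings from the host to 10.0.0.2 and 10.0.0.3, and from inside the namespace back to 10.0.0.1, as seen further down, confirm the topology before the target is started.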
00:16:36.746 07:25:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:36.746 07:25:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:36.746 07:25:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.746 07:25:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.746 07:25:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.746 07:25:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:36.746 07:25:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:36.746 07:25:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:36.746 07:25:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:36.746 07:25:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:36.746 07:25:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:36.746 07:25:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.746 07:25:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.746 07:25:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.746 07:25:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:36.746 07:25:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.746 07:25:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.746 07:25:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.746 07:25:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.746 07:25:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.746 07:25:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.746 07:25:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.746 07:25:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.746 07:25:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:36.746 07:25:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:36.746 Cannot find device "nvmf_tgt_br" 00:16:36.746 07:25:59 -- nvmf/common.sh@154 -- # true 00:16:36.746 07:25:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.746 Cannot find device "nvmf_tgt_br2" 00:16:36.746 07:25:59 -- nvmf/common.sh@155 -- # true 00:16:36.746 07:25:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:37.006 07:25:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:37.006 Cannot find device "nvmf_tgt_br" 00:16:37.006 07:25:59 -- nvmf/common.sh@157 -- # true 00:16:37.006 07:25:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:37.006 Cannot find device "nvmf_tgt_br2" 00:16:37.006 07:25:59 -- nvmf/common.sh@158 -- # true 00:16:37.006 07:25:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:37.006 07:25:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:37.006 07:25:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.006 07:25:59 -- nvmf/common.sh@161 -- # true 00:16:37.006 07:25:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.006 07:25:59 -- nvmf/common.sh@162 -- # true 00:16:37.006 07:25:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:37.006 07:25:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:37.006 07:25:59 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:37.006 07:25:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:37.006 07:25:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:37.006 07:25:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:37.006 07:25:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:37.006 07:25:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:37.006 07:25:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:37.006 07:25:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:37.006 07:25:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:37.006 07:25:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:37.006 07:25:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:37.006 07:25:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:37.006 07:25:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:37.006 07:25:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:37.006 07:25:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:37.006 07:25:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:37.006 07:25:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:37.006 07:25:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.006 07:25:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.006 07:25:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.265 07:25:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:37.265 07:25:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:37.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:16:37.265 00:16:37.265 --- 10.0.0.2 ping statistics --- 00:16:37.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.265 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:37.265 07:25:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:37.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:37.265 00:16:37.265 --- 10.0.0.3 ping statistics --- 00:16:37.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.265 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:37.265 07:25:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:37.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:16:37.265 00:16:37.265 --- 10.0.0.1 ping statistics --- 00:16:37.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.265 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:37.265 07:25:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.265 07:25:59 -- nvmf/common.sh@421 -- # return 0 00:16:37.265 07:25:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:37.265 07:25:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.265 07:25:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:37.265 07:25:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:37.265 07:25:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.265 07:25:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:37.265 07:25:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:37.265 07:25:59 -- host/fio.sh@16 -- # [[ y != y ]] 00:16:37.265 07:25:59 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:37.265 07:25:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:37.265 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:16:37.265 07:25:59 -- host/fio.sh@24 -- # nvmfpid=81924 00:16:37.265 07:25:59 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:37.265 07:25:59 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:37.265 07:25:59 -- host/fio.sh@28 -- # waitforlisten 81924 00:16:37.265 07:25:59 -- common/autotest_common.sh@829 -- # '[' -z 81924 ']' 00:16:37.265 07:25:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.265 07:25:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.265 07:25:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.265 07:25:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.265 07:25:59 -- common/autotest_common.sh@10 -- # set +x 00:16:37.265 [2024-11-28 07:25:59.390730] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:37.265 [2024-11-28 07:25:59.390842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.265 [2024-11-28 07:25:59.531146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.525 [2024-11-28 07:25:59.597011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:37.525 [2024-11-28 07:25:59.597154] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.525 [2024-11-28 07:25:59.597168] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.525 [2024-11-28 07:25:59.597176] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
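With the network verified and nvme-tcp loaded, host/fio.sh starts nvmf_tgt inside the namespace and provisions it entirely over the RPC socket, as the next stretch of the trace shows: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.2:4420. Condensed into a sketch (paths as in this run; the harness itself waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # ...wait for the RPC socket, then:
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The fio runs that follow target this subsystem through the SPDK fio plugin with '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' rather than through the kernel initiator.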
00:16:37.525 [2024-11-28 07:25:59.597322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.525 [2024-11-28 07:25:59.597705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.525 [2024-11-28 07:25:59.598151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.525 [2024-11-28 07:25:59.598157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.094 07:26:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.094 07:26:00 -- common/autotest_common.sh@862 -- # return 0 00:16:38.094 07:26:00 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:38.352 [2024-11-28 07:26:00.598543] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.611 07:26:00 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:38.611 07:26:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:38.611 07:26:00 -- common/autotest_common.sh@10 -- # set +x 00:16:38.611 07:26:00 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:38.886 Malloc1 00:16:38.886 07:26:01 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:39.179 07:26:01 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:39.438 07:26:01 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.438 [2024-11-28 07:26:01.695288] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.697 07:26:01 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:39.955 07:26:02 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:39.955 07:26:02 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:39.955 07:26:02 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:39.955 07:26:02 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:39.955 07:26:02 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:39.955 07:26:02 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:39.955 07:26:02 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:39.955 07:26:02 -- common/autotest_common.sh@1330 -- # shift 00:16:39.955 07:26:02 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:39.955 07:26:02 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:39.955 07:26:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:39.955 07:26:02 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:39.955 07:26:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:39.955 07:26:02 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:39.955 07:26:02 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:39.955 07:26:02 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:39.955 07:26:02 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:39.955 07:26:02 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:39.955 07:26:02 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:39.955 07:26:02 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:39.955 07:26:02 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:39.955 07:26:02 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:39.955 07:26:02 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:39.955 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:39.955 fio-3.35 00:16:39.955 Starting 1 thread 00:16:42.491 00:16:42.491 test: (groupid=0, jobs=1): err= 0: pid=82007: Thu Nov 28 07:26:04 2024 00:16:42.491 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(78.9MiB/2006msec) 00:16:42.491 slat (nsec): min=1624, max=275439, avg=2474.61, stdev=3122.22 00:16:42.491 clat (usec): min=2007, max=11264, avg=6605.75, stdev=497.86 00:16:42.491 lat (usec): min=2035, max=11266, avg=6608.22, stdev=497.72 00:16:42.491 clat percentiles (usec): 00:16:42.491 | 1.00th=[ 5538], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:16:42.491 | 30.00th=[ 6390], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:16:42.491 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:16:42.491 | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[ 9634], 99.95th=[10290], 00:16:42.491 | 99.99th=[10945] 00:16:42.491 bw ( KiB/s): min=39800, max=40440, per=99.94%, avg=40242.00, stdev=298.29, samples=4 00:16:42.491 iops : min= 9950, max=10110, avg=10060.50, stdev=74.57, samples=4 00:16:42.491 write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(78.9MiB/2006msec); 0 zone resets 00:16:42.491 slat (nsec): min=1702, max=208667, avg=2558.54, stdev=2545.83 00:16:42.491 clat (usec): min=1900, max=10712, avg=6054.79, stdev=462.93 00:16:42.491 lat (usec): min=1910, max=10714, avg=6057.35, stdev=462.91 00:16:42.491 clat percentiles (usec): 00:16:42.491 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:16:42.491 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:16:42.491 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6783], 00:16:42.491 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 8979], 99.95th=[10290], 00:16:42.491 | 99.99th=[10683] 00:16:42.491 bw ( KiB/s): min=39744, max=40704, per=100.00%, avg=40290.00, stdev=400.34, samples=4 00:16:42.491 iops : min= 9936, max=10176, avg=10072.50, stdev=100.08, samples=4 00:16:42.491 lat (msec) : 2=0.01%, 4=0.14%, 10=99.79%, 20=0.07% 00:16:42.491 cpu : usr=66.63%, sys=23.99%, ctx=7, majf=0, minf=5 00:16:42.491 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:42.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:42.491 issued rwts: total=20194,20199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.491 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:42.491 00:16:42.491 Run status group 0 (all jobs): 00:16:42.491 READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.9MiB 
(82.7MB), run=2006-2006msec 00:16:42.491 WRITE: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.9MiB (82.7MB), run=2006-2006msec 00:16:42.491 07:26:04 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:42.491 07:26:04 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:42.491 07:26:04 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:42.491 07:26:04 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:42.491 07:26:04 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:42.491 07:26:04 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:42.491 07:26:04 -- common/autotest_common.sh@1330 -- # shift 00:16:42.491 07:26:04 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:42.491 07:26:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:42.492 07:26:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:42.492 07:26:04 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:42.492 07:26:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:42.492 07:26:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:42.492 07:26:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:42.492 07:26:04 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:42.492 07:26:04 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:42.492 07:26:04 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:42.492 07:26:04 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:42.492 07:26:04 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:42.492 07:26:04 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:42.492 07:26:04 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:42.492 07:26:04 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:42.492 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:42.492 fio-3.35 00:16:42.492 Starting 1 thread 00:16:45.027 00:16:45.027 test: (groupid=0, jobs=1): err= 0: pid=82050: Thu Nov 28 07:26:06 2024 00:16:45.027 read: IOPS=8692, BW=136MiB/s (142MB/s)(272MiB/2001msec) 00:16:45.027 slat (usec): min=2, max=120, avg= 3.91, stdev= 2.49 00:16:45.027 clat (usec): min=181, max=15690, avg=8057.75, stdev=2424.98 00:16:45.027 lat (usec): min=192, max=15692, avg=8061.67, stdev=2425.17 00:16:45.027 clat percentiles (usec): 00:16:45.027 | 1.00th=[ 4047], 5.00th=[ 4752], 10.00th=[ 5211], 20.00th=[ 5866], 00:16:45.027 | 30.00th=[ 6456], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8455], 00:16:45.027 | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[11338], 95.00th=[12649], 00:16:45.027 | 99.00th=[14484], 99.50th=[14746], 99.90th=[15270], 99.95th=[15401], 00:16:45.027 | 99.99th=[15664] 00:16:45.027 bw ( KiB/s): min=62016, max=70898, per=48.73%, avg=67771.33, stdev=4990.48, samples=3 00:16:45.027 
iops : min= 3876, max= 4431, avg=4235.67, stdev=311.87, samples=3 00:16:45.027 write: IOPS=4819, BW=75.3MiB/s (79.0MB/s)(138MiB/1838msec); 0 zone resets 00:16:45.027 slat (usec): min=28, max=366, avg=38.37, stdev= 9.65 00:16:45.027 clat (usec): min=3708, max=19021, avg=11929.81, stdev=1873.20 00:16:45.027 lat (usec): min=3737, max=19068, avg=11968.18, stdev=1874.75 00:16:45.027 clat percentiles (usec): 00:16:45.027 | 1.00th=[ 8094], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10421], 00:16:45.027 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:16:45.027 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14484], 95.00th=[15533], 00:16:45.027 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:16:45.027 | 99.99th=[19006] 00:16:45.027 bw ( KiB/s): min=65184, max=73229, per=91.39%, avg=70479.00, stdev=4586.75, samples=3 00:16:45.027 iops : min= 4074, max= 4576, avg=4404.67, stdev=286.43, samples=3 00:16:45.027 lat (usec) : 250=0.01% 00:16:45.027 lat (msec) : 2=0.02%, 4=0.55%, 10=55.77%, 20=43.65% 00:16:45.027 cpu : usr=75.01%, sys=17.94%, ctx=21, majf=0, minf=1 00:16:45.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:45.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.027 issued rwts: total=17393,8859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.027 00:16:45.027 Run status group 0 (all jobs): 00:16:45.027 READ: bw=136MiB/s (142MB/s), 136MiB/s-136MiB/s (142MB/s-142MB/s), io=272MiB (285MB), run=2001-2001msec 00:16:45.027 WRITE: bw=75.3MiB/s (79.0MB/s), 75.3MiB/s-75.3MiB/s (79.0MB/s-79.0MB/s), io=138MiB (145MB), run=1838-1838msec 00:16:45.027 07:26:07 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.027 07:26:07 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:16:45.027 07:26:07 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:16:45.027 07:26:07 -- host/fio.sh@51 -- # get_nvme_bdfs 00:16:45.027 07:26:07 -- common/autotest_common.sh@1508 -- # bdfs=() 00:16:45.027 07:26:07 -- common/autotest_common.sh@1508 -- # local bdfs 00:16:45.027 07:26:07 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:45.027 07:26:07 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:45.027 07:26:07 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:16:45.286 07:26:07 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:16:45.286 07:26:07 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:16:45.286 07:26:07 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:16:45.544 Nvme0n1 00:16:45.544 07:26:07 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:16:45.803 07:26:07 -- host/fio.sh@53 -- # ls_guid=48d5d58c-2ef1-4907-82b2-d949c3c5530a 00:16:45.803 07:26:07 -- host/fio.sh@54 -- # get_lvs_free_mb 48d5d58c-2ef1-4907-82b2-d949c3c5530a 00:16:45.803 07:26:07 -- common/autotest_common.sh@1353 -- # local lvs_uuid=48d5d58c-2ef1-4907-82b2-d949c3c5530a 00:16:45.803 07:26:07 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:45.803 07:26:07 -- 
common/autotest_common.sh@1355 -- # local fc 00:16:45.803 07:26:07 -- common/autotest_common.sh@1356 -- # local cs 00:16:45.803 07:26:07 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:46.061 07:26:08 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:46.061 { 00:16:46.061 "uuid": "48d5d58c-2ef1-4907-82b2-d949c3c5530a", 00:16:46.061 "name": "lvs_0", 00:16:46.061 "base_bdev": "Nvme0n1", 00:16:46.061 "total_data_clusters": 4, 00:16:46.061 "free_clusters": 4, 00:16:46.061 "block_size": 4096, 00:16:46.061 "cluster_size": 1073741824 00:16:46.061 } 00:16:46.061 ]' 00:16:46.061 07:26:08 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="48d5d58c-2ef1-4907-82b2-d949c3c5530a") .free_clusters' 00:16:46.061 07:26:08 -- common/autotest_common.sh@1358 -- # fc=4 00:16:46.061 07:26:08 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="48d5d58c-2ef1-4907-82b2-d949c3c5530a") .cluster_size' 00:16:46.061 4096 00:16:46.061 07:26:08 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:16:46.061 07:26:08 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:16:46.061 07:26:08 -- common/autotest_common.sh@1363 -- # echo 4096 00:16:46.061 07:26:08 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:16:46.319 8adc29f0-9a35-416f-9a49-64fa0e9627fe 00:16:46.319 07:26:08 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:16:46.578 07:26:08 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:16:46.835 07:26:08 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:47.094 07:26:09 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:47.094 07:26:09 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:47.094 07:26:09 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:47.094 07:26:09 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:47.094 07:26:09 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:47.094 07:26:09 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:47.094 07:26:09 -- common/autotest_common.sh@1330 -- # shift 00:16:47.094 07:26:09 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:47.094 07:26:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:47.094 07:26:09 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:47.094 07:26:09 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:47.094 07:26:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:47.094 07:26:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:47.094 07:26:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:47.094 07:26:09 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:47.094 07:26:09 -- common/autotest_common.sh@1334 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:47.094 07:26:09 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:47.094 07:26:09 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:47.095 07:26:09 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:47.095 07:26:09 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:47.095 07:26:09 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:47.095 07:26:09 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:47.095 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:47.095 fio-3.35 00:16:47.095 Starting 1 thread 00:16:49.633 00:16:49.633 test: (groupid=0, jobs=1): err= 0: pid=82163: Thu Nov 28 07:26:11 2024 00:16:49.633 read: IOPS=6980, BW=27.3MiB/s (28.6MB/s)(54.8MiB/2009msec) 00:16:49.633 slat (nsec): min=1815, max=333587, avg=2536.52, stdev=4169.73 00:16:49.633 clat (usec): min=2902, max=17496, avg=9577.06, stdev=927.55 00:16:49.633 lat (usec): min=2912, max=17498, avg=9579.59, stdev=927.34 00:16:49.633 clat percentiles (usec): 00:16:49.633 | 1.00th=[ 7767], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8848], 00:16:49.633 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:16:49.633 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:16:49.633 | 99.00th=[11863], 99.50th=[12125], 99.90th=[15401], 99.95th=[16450], 00:16:49.633 | 99.99th=[17433] 00:16:49.633 bw ( KiB/s): min=25912, max=29360, per=99.80%, avg=27865.00, stdev=1477.83, samples=4 00:16:49.633 iops : min= 6478, max= 7340, avg=6966.75, stdev=369.36, samples=4 00:16:49.633 write: IOPS=6985, BW=27.3MiB/s (28.6MB/s)(54.8MiB/2009msec); 0 zone resets 00:16:49.633 slat (nsec): min=1884, max=281303, avg=2613.97, stdev=3220.48 00:16:49.633 clat (usec): min=2495, max=18150, avg=8688.68, stdev=880.24 00:16:49.633 lat (usec): min=2509, max=18163, avg=8691.30, stdev=880.14 00:16:49.633 clat percentiles (usec): 00:16:49.633 | 1.00th=[ 6980], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 8029], 00:16:49.633 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:16:49.633 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10028], 00:16:49.633 | 99.00th=[10814], 99.50th=[11207], 99.90th=[15401], 99.95th=[16188], 00:16:49.633 | 99.99th=[18220] 00:16:49.633 bw ( KiB/s): min=26216, max=28744, per=99.87%, avg=27905.75, stdev=1166.52, samples=4 00:16:49.633 iops : min= 6554, max= 7186, avg=6976.25, stdev=291.60, samples=4 00:16:49.633 lat (msec) : 4=0.07%, 10=81.88%, 20=18.05% 00:16:49.633 cpu : usr=71.26%, sys=22.71%, ctx=3, majf=0, minf=14 00:16:49.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:49.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:49.633 issued rwts: total=14023,14033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:49.633 00:16:49.633 Run status group 0 (all jobs): 00:16:49.633 READ: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=54.8MiB (57.4MB), run=2009-2009msec 00:16:49.633 WRITE: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=54.8MiB (57.5MB), 
run=2009-2009msec 00:16:49.633 07:26:11 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:49.633 07:26:11 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:16:49.892 07:26:12 -- host/fio.sh@64 -- # ls_nested_guid=af95cfa2-b7d0-4316-a07d-059835162fa5 00:16:49.892 07:26:12 -- host/fio.sh@65 -- # get_lvs_free_mb af95cfa2-b7d0-4316-a07d-059835162fa5 00:16:49.892 07:26:12 -- common/autotest_common.sh@1353 -- # local lvs_uuid=af95cfa2-b7d0-4316-a07d-059835162fa5 00:16:49.892 07:26:12 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:49.892 07:26:12 -- common/autotest_common.sh@1355 -- # local fc 00:16:49.892 07:26:12 -- common/autotest_common.sh@1356 -- # local cs 00:16:49.892 07:26:12 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:50.461 07:26:12 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:50.461 { 00:16:50.461 "uuid": "48d5d58c-2ef1-4907-82b2-d949c3c5530a", 00:16:50.461 "name": "lvs_0", 00:16:50.461 "base_bdev": "Nvme0n1", 00:16:50.461 "total_data_clusters": 4, 00:16:50.461 "free_clusters": 0, 00:16:50.461 "block_size": 4096, 00:16:50.461 "cluster_size": 1073741824 00:16:50.461 }, 00:16:50.461 { 00:16:50.461 "uuid": "af95cfa2-b7d0-4316-a07d-059835162fa5", 00:16:50.461 "name": "lvs_n_0", 00:16:50.461 "base_bdev": "8adc29f0-9a35-416f-9a49-64fa0e9627fe", 00:16:50.461 "total_data_clusters": 1022, 00:16:50.461 "free_clusters": 1022, 00:16:50.461 "block_size": 4096, 00:16:50.461 "cluster_size": 4194304 00:16:50.461 } 00:16:50.461 ]' 00:16:50.461 07:26:12 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="af95cfa2-b7d0-4316-a07d-059835162fa5") .free_clusters' 00:16:50.461 07:26:12 -- common/autotest_common.sh@1358 -- # fc=1022 00:16:50.461 07:26:12 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="af95cfa2-b7d0-4316-a07d-059835162fa5") .cluster_size' 00:16:50.461 4088 00:16:50.461 07:26:12 -- common/autotest_common.sh@1359 -- # cs=4194304 00:16:50.461 07:26:12 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:16:50.461 07:26:12 -- common/autotest_common.sh@1363 -- # echo 4088 00:16:50.461 07:26:12 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:16:50.720 b18a0fba-701d-415e-a679-df6e319c3433 00:16:50.720 07:26:12 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:16:50.980 07:26:13 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:16:50.980 07:26:13 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:51.239 07:26:13 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:51.239 07:26:13 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:51.239 07:26:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:51.239 07:26:13 -- common/autotest_common.sh@1328 -- 
# sanitizers=('libasan' 'libclang_rt.asan') 00:16:51.239 07:26:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:51.239 07:26:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:51.239 07:26:13 -- common/autotest_common.sh@1330 -- # shift 00:16:51.239 07:26:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:51.239 07:26:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:51.239 07:26:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:51.239 07:26:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:51.239 07:26:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:51.239 07:26:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:51.239 07:26:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:51.239 07:26:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:51.239 07:26:13 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:51.239 07:26:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:51.239 07:26:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:51.239 07:26:13 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:51.239 07:26:13 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:51.239 07:26:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:51.239 07:26:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:51.498 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:51.498 fio-3.35 00:16:51.498 Starting 1 thread 00:16:54.032 00:16:54.032 test: (groupid=0, jobs=1): err= 0: pid=82241: Thu Nov 28 07:26:15 2024 00:16:54.032 read: IOPS=5890, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2009msec) 00:16:54.032 slat (nsec): min=1776, max=337069, avg=2979.18, stdev=5021.89 00:16:54.032 clat (usec): min=3232, max=21406, avg=11356.06, stdev=1031.51 00:16:54.032 lat (usec): min=3242, max=21409, avg=11359.04, stdev=1031.15 00:16:54.032 clat percentiles (usec): 00:16:54.032 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:16:54.032 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:16:54.032 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:16:54.032 | 99.00th=[13698], 99.50th=[13960], 99.90th=[18744], 99.95th=[20055], 00:16:54.032 | 99.99th=[21365] 00:16:54.032 bw ( KiB/s): min=23232, max=24208, per=99.92%, avg=23544.00, stdev=459.33, samples=4 00:16:54.032 iops : min= 5808, max= 6052, avg=5886.00, stdev=114.83, samples=4 00:16:54.032 write: IOPS=5886, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2009msec); 0 zone resets 00:16:54.032 slat (nsec): min=1838, max=242187, avg=3035.99, stdev=3706.38 00:16:54.032 clat (usec): min=2638, max=18859, avg=10303.00, stdev=955.86 00:16:54.032 lat (usec): min=2651, max=18873, avg=10306.04, stdev=955.64 00:16:54.032 clat percentiles (usec): 00:16:54.032 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:16:54.032 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:16:54.032 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:16:54.032 | 99.00th=[12518], 99.50th=[12911], 
99.90th=[17433], 99.95th=[18744], 00:16:54.032 | 99.99th=[18744] 00:16:54.032 bw ( KiB/s): min=22976, max=24136, per=99.91%, avg=23522.00, stdev=576.61, samples=4 00:16:54.032 iops : min= 5744, max= 6034, avg=5880.50, stdev=144.15, samples=4 00:16:54.032 lat (msec) : 4=0.05%, 10=22.36%, 20=77.55%, 50=0.03% 00:16:54.032 cpu : usr=70.32%, sys=22.91%, ctx=5, majf=0, minf=14 00:16:54.032 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:16:54.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.032 issued rwts: total=11835,11825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.032 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.032 00:16:54.032 Run status group 0 (all jobs): 00:16:54.032 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.5MB), run=2009-2009msec 00:16:54.032 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.4MB), run=2009-2009msec 00:16:54.032 07:26:15 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:54.032 07:26:16 -- host/fio.sh@74 -- # sync 00:16:54.032 07:26:16 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:16:54.292 07:26:16 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:54.551 07:26:16 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:16:54.811 07:26:16 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:55.070 07:26:17 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:56.008 07:26:18 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:56.008 07:26:18 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:56.008 07:26:18 -- host/fio.sh@86 -- # nvmftestfini 00:16:56.008 07:26:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:56.008 07:26:18 -- nvmf/common.sh@116 -- # sync 00:16:56.008 07:26:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:56.008 07:26:18 -- nvmf/common.sh@119 -- # set +e 00:16:56.008 07:26:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:56.008 07:26:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:56.008 rmmod nvme_tcp 00:16:56.008 rmmod nvme_fabrics 00:16:56.008 rmmod nvme_keyring 00:16:56.008 07:26:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:56.008 07:26:18 -- nvmf/common.sh@123 -- # set -e 00:16:56.008 07:26:18 -- nvmf/common.sh@124 -- # return 0 00:16:56.008 07:26:18 -- nvmf/common.sh@477 -- # '[' -n 81924 ']' 00:16:56.008 07:26:18 -- nvmf/common.sh@478 -- # killprocess 81924 00:16:56.008 07:26:18 -- common/autotest_common.sh@936 -- # '[' -z 81924 ']' 00:16:56.008 07:26:18 -- common/autotest_common.sh@940 -- # kill -0 81924 00:16:56.008 07:26:18 -- common/autotest_common.sh@941 -- # uname 00:16:56.008 07:26:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:56.008 07:26:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81924 00:16:56.008 killing process with pid 81924 00:16:56.008 07:26:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:56.008 07:26:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:56.008 07:26:18 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 81924' 00:16:56.008 07:26:18 -- common/autotest_common.sh@955 -- # kill 81924 00:16:56.008 07:26:18 -- common/autotest_common.sh@960 -- # wait 81924 00:16:56.575 07:26:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:56.575 07:26:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:56.575 07:26:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:56.575 07:26:18 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.575 07:26:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:56.575 07:26:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.575 07:26:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.575 07:26:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.575 07:26:18 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:56.575 ************************************ 00:16:56.575 END TEST nvmf_fio_host 00:16:56.575 ************************************ 00:16:56.575 00:16:56.575 real 0m19.815s 00:16:56.575 user 1m26.314s 00:16:56.575 sys 0m4.895s 00:16:56.575 07:26:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:56.575 07:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:56.575 07:26:18 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:56.575 07:26:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:56.575 07:26:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.575 07:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:56.575 ************************************ 00:16:56.575 START TEST nvmf_failover 00:16:56.575 ************************************ 00:16:56.575 07:26:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:56.575 * Looking for test storage... 00:16:56.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:56.575 07:26:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:56.575 07:26:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:56.575 07:26:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:56.575 07:26:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:56.575 07:26:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:56.575 07:26:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:56.575 07:26:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:56.575 07:26:18 -- scripts/common.sh@335 -- # IFS=.-: 00:16:56.575 07:26:18 -- scripts/common.sh@335 -- # read -ra ver1 00:16:56.575 07:26:18 -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.575 07:26:18 -- scripts/common.sh@336 -- # read -ra ver2 00:16:56.575 07:26:18 -- scripts/common.sh@337 -- # local 'op=<' 00:16:56.575 07:26:18 -- scripts/common.sh@339 -- # ver1_l=2 00:16:56.575 07:26:18 -- scripts/common.sh@340 -- # ver2_l=1 00:16:56.575 07:26:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:56.575 07:26:18 -- scripts/common.sh@343 -- # case "$op" in 00:16:56.575 07:26:18 -- scripts/common.sh@344 -- # : 1 00:16:56.575 07:26:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:56.576 07:26:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.576 07:26:18 -- scripts/common.sh@364 -- # decimal 1 00:16:56.576 07:26:18 -- scripts/common.sh@352 -- # local d=1 00:16:56.576 07:26:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.576 07:26:18 -- scripts/common.sh@354 -- # echo 1 00:16:56.576 07:26:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:56.576 07:26:18 -- scripts/common.sh@365 -- # decimal 2 00:16:56.576 07:26:18 -- scripts/common.sh@352 -- # local d=2 00:16:56.576 07:26:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.576 07:26:18 -- scripts/common.sh@354 -- # echo 2 00:16:56.576 07:26:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:56.576 07:26:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:56.576 07:26:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:56.576 07:26:18 -- scripts/common.sh@367 -- # return 0 00:16:56.576 07:26:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.576 07:26:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:56.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.576 --rc genhtml_branch_coverage=1 00:16:56.576 --rc genhtml_function_coverage=1 00:16:56.576 --rc genhtml_legend=1 00:16:56.576 --rc geninfo_all_blocks=1 00:16:56.576 --rc geninfo_unexecuted_blocks=1 00:16:56.576 00:16:56.576 ' 00:16:56.576 07:26:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:56.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.576 --rc genhtml_branch_coverage=1 00:16:56.576 --rc genhtml_function_coverage=1 00:16:56.576 --rc genhtml_legend=1 00:16:56.576 --rc geninfo_all_blocks=1 00:16:56.576 --rc geninfo_unexecuted_blocks=1 00:16:56.576 00:16:56.576 ' 00:16:56.576 07:26:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:56.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.576 --rc genhtml_branch_coverage=1 00:16:56.576 --rc genhtml_function_coverage=1 00:16:56.576 --rc genhtml_legend=1 00:16:56.576 --rc geninfo_all_blocks=1 00:16:56.576 --rc geninfo_unexecuted_blocks=1 00:16:56.576 00:16:56.576 ' 00:16:56.576 07:26:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:56.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.576 --rc genhtml_branch_coverage=1 00:16:56.576 --rc genhtml_function_coverage=1 00:16:56.576 --rc genhtml_legend=1 00:16:56.576 --rc geninfo_all_blocks=1 00:16:56.576 --rc geninfo_unexecuted_blocks=1 00:16:56.576 00:16:56.576 ' 00:16:56.576 07:26:18 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.576 07:26:18 -- nvmf/common.sh@7 -- # uname -s 00:16:56.576 07:26:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.576 07:26:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.576 07:26:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.576 07:26:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.576 07:26:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.576 07:26:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.576 07:26:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.576 07:26:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.576 07:26:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.576 07:26:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.576 07:26:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:16:56.576 
07:26:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:16:56.576 07:26:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.576 07:26:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.576 07:26:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.576 07:26:18 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.576 07:26:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.576 07:26:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.576 07:26:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.576 07:26:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.576 07:26:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.576 07:26:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.576 07:26:18 -- paths/export.sh@5 -- # export PATH 00:16:56.576 07:26:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.576 07:26:18 -- nvmf/common.sh@46 -- # : 0 00:16:56.576 07:26:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:56.576 07:26:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:56.576 07:26:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:56.576 07:26:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.576 07:26:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.576 07:26:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:56.576 07:26:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:56.576 07:26:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:56.576 07:26:18 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.576 07:26:18 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.576 07:26:18 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.576 07:26:18 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.576 07:26:18 -- host/failover.sh@18 -- # nvmftestinit 00:16:56.576 07:26:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:56.576 07:26:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.576 07:26:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:56.576 07:26:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:56.576 07:26:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:56.576 07:26:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.576 07:26:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.576 07:26:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.576 07:26:18 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:56.576 07:26:18 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:56.835 07:26:18 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:56.835 07:26:18 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:56.835 07:26:18 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:56.835 07:26:18 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:56.835 07:26:18 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.835 07:26:18 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.835 07:26:18 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:56.835 07:26:18 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:56.835 07:26:18 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.835 07:26:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.835 07:26:18 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.835 07:26:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.835 07:26:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.835 07:26:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.835 07:26:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.835 07:26:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.835 07:26:18 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:56.835 07:26:18 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:56.835 Cannot find device "nvmf_tgt_br" 00:16:56.835 07:26:18 -- nvmf/common.sh@154 -- # true 00:16:56.835 07:26:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.835 Cannot find device "nvmf_tgt_br2" 00:16:56.835 07:26:18 -- nvmf/common.sh@155 -- # true 00:16:56.835 07:26:18 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:56.835 07:26:18 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:56.835 Cannot find device "nvmf_tgt_br" 00:16:56.835 07:26:18 -- nvmf/common.sh@157 -- # true 00:16:56.835 07:26:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:56.835 Cannot find device "nvmf_tgt_br2" 00:16:56.835 07:26:18 -- nvmf/common.sh@158 -- # true 00:16:56.835 07:26:18 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:56.835 07:26:18 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:16:56.835 07:26:18 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.835 07:26:18 -- nvmf/common.sh@161 -- # true 00:16:56.835 07:26:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.835 07:26:18 -- nvmf/common.sh@162 -- # true 00:16:56.835 07:26:18 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.835 07:26:18 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.835 07:26:18 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.835 07:26:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.835 07:26:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.835 07:26:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.835 07:26:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.835 07:26:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.835 07:26:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:56.836 07:26:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:56.836 07:26:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:56.836 07:26:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:56.836 07:26:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:56.836 07:26:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.836 07:26:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.836 07:26:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.836 07:26:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:56.836 07:26:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:56.836 07:26:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.836 07:26:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:57.094 07:26:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:57.095 07:26:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:57.095 07:26:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:57.095 07:26:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:57.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:57.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:57.095 00:16:57.095 --- 10.0.0.2 ping statistics --- 00:16:57.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.095 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:57.095 07:26:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:57.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:57.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:16:57.095 00:16:57.095 --- 10.0.0.3 ping statistics --- 00:16:57.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.095 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:57.095 07:26:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:57.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:57.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:57.095 00:16:57.095 --- 10.0.0.1 ping statistics --- 00:16:57.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:57.095 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:57.095 07:26:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:57.095 07:26:19 -- nvmf/common.sh@421 -- # return 0 00:16:57.095 07:26:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:57.095 07:26:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:57.095 07:26:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:57.095 07:26:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:57.095 07:26:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:57.095 07:26:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:57.095 07:26:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:57.095 07:26:19 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:57.095 07:26:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:57.095 07:26:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.095 07:26:19 -- common/autotest_common.sh@10 -- # set +x 00:16:57.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.095 07:26:19 -- nvmf/common.sh@469 -- # nvmfpid=82485 00:16:57.095 07:26:19 -- nvmf/common.sh@470 -- # waitforlisten 82485 00:16:57.095 07:26:19 -- common/autotest_common.sh@829 -- # '[' -z 82485 ']' 00:16:57.095 07:26:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:57.095 07:26:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.095 07:26:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.095 07:26:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.095 07:26:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.095 07:26:19 -- common/autotest_common.sh@10 -- # set +x 00:16:57.095 [2024-11-28 07:26:19.230951] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:57.095 [2024-11-28 07:26:19.231042] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.365 [2024-11-28 07:26:19.374798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.365 [2024-11-28 07:26:19.454700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.365 [2024-11-28 07:26:19.454888] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.365 [2024-11-28 07:26:19.454905] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:57.365 [2024-11-28 07:26:19.454917] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.365 [2024-11-28 07:26:19.455101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.365 [2024-11-28 07:26:19.455822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.365 [2024-11-28 07:26:19.455882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.955 07:26:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.955 07:26:20 -- common/autotest_common.sh@862 -- # return 0 00:16:57.955 07:26:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:57.955 07:26:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:57.955 07:26:20 -- common/autotest_common.sh@10 -- # set +x 00:16:57.955 07:26:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.955 07:26:20 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:58.523 [2024-11-28 07:26:20.494006] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.523 07:26:20 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:58.781 Malloc0 00:16:58.781 07:26:20 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:59.040 07:26:21 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:59.300 07:26:21 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.560 [2024-11-28 07:26:21.593249] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.560 07:26:21 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:59.818 [2024-11-28 07:26:21.877454] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:59.818 07:26:21 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:59.818 [2024-11-28 07:26:22.085749] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:00.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
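Taken together, the target bring-up logged above and the listener toggling that follows reduce to roughly the shell sketch below. Commands, addresses, ports and NQNs are copied from this log; `rpc` is only a shorthand for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path the test actually invokes, and the harness's tracing, waits-for-listen and pid bookkeeping are omitted.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand for readability only

# Target side: TCP transport, a 64 MB malloc bdev with 512 B blocks, one subsystem,
# and listeners on three ports of the same address.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: bdevperf runs verify I/O while two paths to the subsystem are attached.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

# Failover is forced by toggling listeners under load, in the order seen further down in this log.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The repeated "ABORTED - SQ DELETION" completions in the bdevperf output captured later in this log are consistent with this sequence: when a listener is removed, in-flight commands on that path are aborted as its submission queues are deleted.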
00:17:00.078 07:26:22 -- host/failover.sh@31 -- # bdevperf_pid=82548 00:17:00.078 07:26:22 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:00.078 07:26:22 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:00.078 07:26:22 -- host/failover.sh@34 -- # waitforlisten 82548 /var/tmp/bdevperf.sock 00:17:00.078 07:26:22 -- common/autotest_common.sh@829 -- # '[' -z 82548 ']' 00:17:00.078 07:26:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.078 07:26:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.078 07:26:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.078 07:26:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.078 07:26:22 -- common/autotest_common.sh@10 -- # set +x 00:17:01.016 07:26:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:01.016 07:26:23 -- common/autotest_common.sh@862 -- # return 0 00:17:01.016 07:26:23 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:01.275 NVMe0n1 00:17:01.275 07:26:23 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:01.844 00:17:01.844 07:26:23 -- host/failover.sh@39 -- # run_test_pid=82577 00:17:01.844 07:26:23 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:01.844 07:26:23 -- host/failover.sh@41 -- # sleep 1 00:17:02.781 07:26:24 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:03.039 [2024-11-28 07:26:25.086012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.039 [2024-11-28 07:26:25.086083] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.039 [2024-11-28 07:26:25.086103] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086111] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086119] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086150] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086158] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086165] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086172] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086180] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086195] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086219] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086234] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086242] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086249] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086257] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086271] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086293] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 [2024-11-28 07:26:25.086307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17240 is same with the state(5) to be set 00:17:03.040 07:26:25 -- host/failover.sh@45 -- # sleep 3 00:17:06.328 07:26:28 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:06.328 00:17:06.328 07:26:28 -- 
host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:06.586 [2024-11-28 07:26:28.739412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17e50 is same with the state(5) to be set 00:17:06.586 [2024-11-28 07:26:28.739513] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17e50 is same with the state(5) to be set 00:17:06.586 [2024-11-28 07:26:28.739537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17e50 is same with the state(5) to be set 00:17:06.586 [2024-11-28 07:26:28.739558] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17e50 is same with the state(5) to be set 00:17:06.586 [2024-11-28 07:26:28.739573] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa17e50 is same with the state(5) to be set 00:17:06.586 07:26:28 -- host/failover.sh@50 -- # sleep 3 00:17:09.871 07:26:31 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.871 [2024-11-28 07:26:32.052445] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.871 07:26:32 -- host/failover.sh@55 -- # sleep 1 00:17:11.247 07:26:33 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:11.247 [2024-11-28 07:26:33.342704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342749] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342761] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342785] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342792] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342800] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342808] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342816] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342823] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342831] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 
[2024-11-28 07:26:33.342838] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342853] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 [2024-11-28 07:26:33.342860] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbb550 is same with the state(5) to be set 00:17:11.247 07:26:33 -- host/failover.sh@59 -- # wait 82577 00:17:17.823 0 00:17:17.823 07:26:38 -- host/failover.sh@61 -- # killprocess 82548 00:17:17.823 07:26:38 -- common/autotest_common.sh@936 -- # '[' -z 82548 ']' 00:17:17.823 07:26:38 -- common/autotest_common.sh@940 -- # kill -0 82548 00:17:17.823 07:26:38 -- common/autotest_common.sh@941 -- # uname 00:17:17.823 07:26:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.823 07:26:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82548 00:17:17.823 07:26:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:17.823 07:26:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:17.823 07:26:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82548' 00:17:17.823 killing process with pid 82548 00:17:17.823 07:26:39 -- common/autotest_common.sh@955 -- # kill 82548 00:17:17.823 07:26:39 -- common/autotest_common.sh@960 -- # wait 82548 00:17:17.823 07:26:39 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:17.823 [2024-11-28 07:26:22.144110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:17.823 [2024-11-28 07:26:22.144208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82548 ] 00:17:17.823 [2024-11-28 07:26:22.277534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.823 [2024-11-28 07:26:22.347933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.823 Running I/O for 15 seconds... 
00:17:17.823 [2024-11-28 07:26:25.086401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.823 [2024-11-28 07:26:25.086800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.823 [2024-11-28 07:26:25.086852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.086866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.086880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.086894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.086908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.086920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.086933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.086946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.086960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.086981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.086995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 
07:26:25.087752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.087815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.824 [2024-11-28 07:26:25.087968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.087988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.088001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.088015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.824 [2024-11-28 07:26:25.088066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.824 [2024-11-28 07:26:25.088081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.088864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.088894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.088919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.088944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.088970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.088983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.088995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:17.825 [2024-11-28 07:26:25.089109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089431] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.825 [2024-11-28 07:26:25.089634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.825 [2024-11-28 07:26:25.089686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.825 [2024-11-28 07:26:25.089709] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.089768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.089981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.089993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6960 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 
[2024-11-28 07:26:25.090302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.826 [2024-11-28 07:26:25.090726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.826 [2024-11-28 07:26:25.090760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6007e0 is same with the state(5) to be set 00:17:17.826 [2024-11-28 07:26:25.090801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:17.826 [2024-11-28 07:26:25.090812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:17.826 [2024-11-28 07:26:25.090822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6528 len:8 PRP1 0x0 PRP2 0x0 00:17:17.826 [2024-11-28 07:26:25.090834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.826 [2024-11-28 07:26:25.090899] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6007e0 was disconnected and freed. reset controller. 
00:17:17.826 [2024-11-28 07:26:25.090917] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:17.826 [2024-11-28 07:26:25.090970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.826 [2024-11-28 07:26:25.090990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:25.091004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.827 [2024-11-28 07:26:25.091016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:25.091028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.827 [2024-11-28 07:26:25.091040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:25.091053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.827 [2024-11-28 07:26:25.091064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:25.091076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:17.827 [2024-11-28 07:26:25.091123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x603820 (9): Bad file descriptor 00:17:17.827 [2024-11-28 07:26:25.093368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:17.827 [2024-11-28 07:26:25.116279] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
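[editor's note] The burst above is dominated by per-command NOTICE records (nvme_io_qpair_print_command / spdk_nvme_print_completion) for I/O aborted by SQ deletion, ending with the qpair being disconnected and freed, a failover from 10.0.0.2:4420 to 10.0.0.2:4421, and a successful controller reset. When triaging a run like this it can help to condense the burst into counts instead of reading it record by record. The sketch below is a minimal, illustrative Python helper — not part of the SPDK test suite; the script itself, reading the excerpt from stdin, and the assumption that only the exact message formats visible above occur are all editorial assumptions — that tallies aborted READ/WRITE commands, the completion statuses, and any failover notices.

    #!/usr/bin/env python3
    # Illustrative log-triage helper (editorial assumption, not shipped with SPDK).
    # It matches only the exact *NOTICE* formats visible in the excerpt above and
    # reads the raw log text from stdin.
    import re
    import sys
    from collections import Counter

    # I/O command print, e.g.
    #   nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6752 len:8 ...
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    # Completion print, e.g.
    #   spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 ...
    CPL_RE = re.compile(
        r"spdk_nvme_print_completion: \*NOTICE\*: ([A-Z ]+?) - ([A-Z ]+?) \((\w+)/(\w+)\)"
    )

    # Failover notice, e.g.
    #   bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
    FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

    def main() -> None:
        text = sys.stdin.read()

        opcodes = Counter(m.group(1) for m in CMD_RE.finditer(text))
        lbas = sorted({int(m.group(5)) for m in CMD_RE.finditer(text)})
        statuses = Counter(
            f"{m.group(1)} - {m.group(2)} ({m.group(3)}/{m.group(4)})"
            for m in CPL_RE.finditer(text)
        )
        failovers = [f"{m.group(1)} -> {m.group(2)}" for m in FAILOVER_RE.finditer(text)]

        print("aborted commands by opcode:", dict(opcodes))
        if lbas:
            print(f"LBA range touched: {lbas[0]}..{lbas[-1]} ({len(lbas)} distinct LBAs)")
        print("completion statuses:", dict(statuses))
        for f in failovers:
            print("failover:", f)

    if __name__ == "__main__":
        main()

Fed the excerpt above on stdin, this would report the READ/WRITE abort counts, the "ABORTED - SQ DELETION (00/08)" status that every completion in the burst carries, and the single 10.0.0.2:4420 -> 10.0.0.2:4421 failover, which is usually all that matters when confirming the reset path behaved as expected.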
00:17:17.827 [2024-11-28 07:26:28.739648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.827 [2024-11-28 07:26:28.739890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.739953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 
07:26:28.739979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.739990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:114040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:114048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:114664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.827 [2024-11-28 07:26:28.740293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.827 [2024-11-28 07:26:28.740442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.827 [2024-11-28 07:26:28.740484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.827 [2024-11-28 07:26:28.740511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.827 [2024-11-28 07:26:28.740574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.827 [2024-11-28 07:26:28.740600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.827 [2024-11-28 07:26:28.740627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740641] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.827 [2024-11-28 07:26:28.740681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.827 [2024-11-28 07:26:28.740695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 
nsid:1 lba:114784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.740973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.740986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114864 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:114288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:114296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:17.828 [2024-11-28 07:26:28.741633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.828 [2024-11-28 07:26:28.741659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.828 [2024-11-28 07:26:28.741769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.828 [2024-11-28 07:26:28.741783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.741794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.741808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.741819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.741833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.741845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.741872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.741884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.741897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 
07:26:28.741908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.741921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.741932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.741945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.741957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.741969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.741981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.741994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.742405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.742560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742619] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.742644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.742684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.742709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.829 [2024-11-28 07:26:28.742758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.742977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.742990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.829 [2024-11-28 07:26:28.743002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.829 [2024-11-28 07:26:28.743015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.830 [2024-11-28 07:26:28.743167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.830 [2024-11-28 07:26:28.743198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.830 [2024-11-28 07:26:28.743227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.830 [2024-11-28 07:26:28.743256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.830 [2024-11-28 07:26:28.743285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.830 [2024-11-28 07:26:28.743352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.830 [2024-11-28 07:26:28.743381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.830 [2024-11-28 07:26:28.743472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 
[2024-11-28 07:26:28.743567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:28.743714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x601440 is same with the state(5) to be set 00:17:17.830 [2024-11-28 07:26:28.743747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:17.830 [2024-11-28 07:26:28.743758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:17.830 [2024-11-28 07:26:28.743768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114624 len:8 PRP1 0x0 PRP2 0x0 00:17:17.830 [2024-11-28 07:26:28.743779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743852] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x601440 was disconnected and freed. reset controller. 
00:17:17.830 [2024-11-28 07:26:28.743869] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:17.830 [2024-11-28 07:26:28.743919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.830 [2024-11-28 07:26:28.743939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.830 [2024-11-28 07:26:28.743965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.743977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.830 [2024-11-28 07:26:28.743990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.744002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.830 [2024-11-28 07:26:28.744014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:28.744055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:17.830 [2024-11-28 07:26:28.746508] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:17.830 [2024-11-28 07:26:28.746547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x603820 (9): Bad file descriptor 00:17:17.830 [2024-11-28 07:26:28.775977] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:17.830 [2024-11-28 07:26:33.342945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.830 [2024-11-28 07:26:33.343328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.830 [2024-11-28 07:26:33.343345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343361] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.343593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.343624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343669] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.343944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:24648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.343982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.343997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.344055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24112 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.344405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.344435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.344465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.344494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.831 [2024-11-28 07:26:33.344524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.831 [2024-11-28 07:26:33.344569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.831 [2024-11-28 07:26:33.344583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 
07:26:33.344612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.344915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.344945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.344975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.344990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.345004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.345092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.345122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.345536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.345567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.345596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.832 [2024-11-28 07:26:33.345626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.832 [2024-11-28 07:26:33.345641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.832 [2024-11-28 07:26:33.345655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.345684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.345713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.345743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.345772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.345801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.345830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 
[2024-11-28 07:26:33.345846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.345860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.345891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.345928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.345958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.345974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.345988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:17.833 [2024-11-28 07:26:33.346740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.833 [2024-11-28 07:26:33.346873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.833 [2024-11-28 07:26:33.346894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-11-28 07:26:33.346911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.834 [2024-11-28 07:26:33.346925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-11-28 07:26:33.346941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:17.834 [2024-11-28 07:26:33.346955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-11-28 07:26:33.346970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x627b00 is same with the state(5) to be set 00:17:17.834 [2024-11-28 07:26:33.346987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:17.834 [2024-11-28 07:26:33.346998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:17.834 [2024-11-28 07:26:33.347009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24568 len:8 PRP1 0x0 PRP2 0x0 00:17:17.834 [2024-11-28 07:26:33.347023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-11-28 07:26:33.347082] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x627b00 was disconnected and freed. reset controller. 
00:17:17.834 [2024-11-28 07:26:33.347100] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:17:17.834 [2024-11-28 07:26:33.347159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.834 [2024-11-28 07:26:33.347181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-11-28 07:26:33.347207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.834 [2024-11-28 07:26:33.347222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-11-28 07:26:33.347236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.834 [2024-11-28 07:26:33.347250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-11-28 07:26:33.347264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:17.834 [2024-11-28 07:26:33.347277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:17.834 [2024-11-28 07:26:33.347292] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:17.834 [2024-11-28 07:26:33.349800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:17.834 [2024-11-28 07:26:33.349843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x603820 (9): Bad file descriptor 00:17:17.834 [2024-11-28 07:26:33.383300] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
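The trace above is the bdev_nvme failover path: once the target drops the 10.0.0.2:4422 listener, every command still queued on that qpair completes with ABORTED - SQ DELETION, the trid fails over to 10.0.0.2:4420, and the controller is reset. A quick sketch for counting these events in a saved copy of this output (the test itself runs the same 'Resetting controller successful' grep on try.txt just below; the file name is assumed to match this run):

  grep -c 'Start failover from' try.txt
  grep -c 'Resetting controller successful' try.txt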
00:17:17.834
00:17:17.834 Latency(us)
00:17:17.834 [2024-11-28T07:26:40.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:17.834 [2024-11-28T07:26:40.109Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:17.834 Verification LBA range: start 0x0 length 0x4000
00:17:17.834 NVMe0n1 : 15.01 13129.27 51.29 305.77 0.00 9510.58 644.19 17396.83
00:17:17.834 [2024-11-28T07:26:40.109Z] ===================================================================================================================
00:17:17.834 [2024-11-28T07:26:40.109Z] Total : 13129.27 51.29 305.77 0.00 9510.58 644.19 17396.83
00:17:17.834 Received shutdown signal, test time was about 15.000000 seconds
00:17:17.834
00:17:17.834 Latency(us)
[2024-11-28T07:26:40.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-28T07:26:40.109Z] ===================================================================================================================
[2024-11-28T07:26:40.109Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:17.834 07:26:39 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:17:17.834 07:26:39 -- host/failover.sh@65 -- # count=3
00:17:17.834 07:26:39 -- host/failover.sh@67 -- # (( count != 3 ))
00:17:17.834 07:26:39 -- host/failover.sh@73 -- # bdevperf_pid=82753
00:17:17.834 07:26:39 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:17:17.834 07:26:39 -- host/failover.sh@75 -- # waitforlisten 82753 /var/tmp/bdevperf.sock
00:17:17.834 07:26:39 -- common/autotest_common.sh@829 -- # '[' -z 82753 ']'
00:17:17.834 07:26:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:17:17.834 07:26:39 -- common/autotest_common.sh@834 -- # local max_retries=100
00:17:17.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:17:17.834 07:26:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
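The bdevperf instance launched above runs with -z (wait for RPC) and its own RPC socket, so NVMe-oF paths can be attached and removed before any I/O is issued; the job is only started later through bdevperf.py perform_tests. A minimal sketch of driving such an instance by hand, using the socket and arguments that appear in this trace (relative paths assumed in place of the absolute /home/vagrant/spdk_repo ones):

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests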
00:17:17.834 07:26:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.834 07:26:39 -- common/autotest_common.sh@10 -- # set +x 00:17:18.122 07:26:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.122 07:26:40 -- common/autotest_common.sh@862 -- # return 0 00:17:18.122 07:26:40 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:18.380 [2024-11-28 07:26:40.635689] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:18.640 07:26:40 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:18.640 [2024-11-28 07:26:40.875776] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:18.640 07:26:40 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:19.210 NVMe0n1 00:17:19.210 07:26:41 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:19.469 00:17:19.469 07:26:41 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:19.728 00:17:19.728 07:26:41 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:19.728 07:26:41 -- host/failover.sh@82 -- # grep -q NVMe0 00:17:19.988 07:26:42 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:20.247 07:26:42 -- host/failover.sh@87 -- # sleep 3 00:17:23.538 07:26:45 -- host/failover.sh@88 -- # grep -q NVMe0 00:17:23.538 07:26:45 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:23.538 07:26:45 -- host/failover.sh@90 -- # run_test_pid=82830 00:17:23.538 07:26:45 -- host/failover.sh@92 -- # wait 82830 00:17:23.538 07:26:45 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:24.917 0 00:17:24.917 07:26:46 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:24.917 [2024-11-28 07:26:39.344608] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
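Condensing the xtrace above, the scenario being set up is: add listeners on ports 4421 and 4422, attach the NVMe0 controller once per target port so bdev_nvme holds three paths, detach the active 4420 path, wait, then run the I/O job so completion has to happen over a surviving path. A sketch of that sequence with the same arguments (rpc.py and bdevperf.py shortened to their base names for readability):

  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  for port in 4420 4421 4422; do
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests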
00:17:24.917 [2024-11-28 07:26:39.344729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82753 ] 00:17:24.917 [2024-11-28 07:26:39.485495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.917 [2024-11-28 07:26:39.560055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.917 [2024-11-28 07:26:42.343003] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:24.917 [2024-11-28 07:26:42.343135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.917 [2024-11-28 07:26:42.343161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.917 [2024-11-28 07:26:42.343179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.917 [2024-11-28 07:26:42.343194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.917 [2024-11-28 07:26:42.343215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.917 [2024-11-28 07:26:42.343229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.917 [2024-11-28 07:26:42.343243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:24.917 [2024-11-28 07:26:42.343257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:24.917 [2024-11-28 07:26:42.343272] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:24.917 [2024-11-28 07:26:42.343344] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:24.917 [2024-11-28 07:26:42.343383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccf820 (9): Bad file descriptor 00:17:24.917 [2024-11-28 07:26:42.353559] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:24.917 Running I/O for 1 seconds... 
00:17:24.917 00:17:24.917 Latency(us) 00:17:24.918 [2024-11-28T07:26:47.193Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:24.918 [2024-11-28T07:26:47.193Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:24.918 Verification LBA range: start 0x0 length 0x4000 00:17:24.918 NVMe0n1 : 1.01 14075.81 54.98 0.00 0.00 9046.83 1087.30 10247.45 00:17:24.918 [2024-11-28T07:26:47.193Z] =================================================================================================================== 00:17:24.918 [2024-11-28T07:26:47.193Z] Total : 14075.81 54.98 0.00 0.00 9046.83 1087.30 10247.45 00:17:24.918 07:26:46 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:24.918 07:26:46 -- host/failover.sh@95 -- # grep -q NVMe0 00:17:24.918 07:26:47 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:25.177 07:26:47 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:25.177 07:26:47 -- host/failover.sh@99 -- # grep -q NVMe0 00:17:25.437 07:26:47 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:25.697 07:26:47 -- host/failover.sh@101 -- # sleep 3 00:17:28.989 07:26:50 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:28.989 07:26:50 -- host/failover.sh@103 -- # grep -q NVMe0 00:17:28.989 07:26:51 -- host/failover.sh@108 -- # killprocess 82753 00:17:28.989 07:26:51 -- common/autotest_common.sh@936 -- # '[' -z 82753 ']' 00:17:28.989 07:26:51 -- common/autotest_common.sh@940 -- # kill -0 82753 00:17:28.989 07:26:51 -- common/autotest_common.sh@941 -- # uname 00:17:28.989 07:26:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:28.989 07:26:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82753 00:17:28.989 07:26:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:28.989 killing process with pid 82753 00:17:28.989 07:26:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:28.989 07:26:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82753' 00:17:28.989 07:26:51 -- common/autotest_common.sh@955 -- # kill 82753 00:17:28.989 07:26:51 -- common/autotest_common.sh@960 -- # wait 82753 00:17:29.249 07:26:51 -- host/failover.sh@110 -- # sync 00:17:29.249 07:26:51 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.509 07:26:51 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:29.509 07:26:51 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:29.509 07:26:51 -- host/failover.sh@116 -- # nvmftestfini 00:17:29.509 07:26:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:29.509 07:26:51 -- nvmf/common.sh@116 -- # sync 00:17:29.509 07:26:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:29.509 07:26:51 -- nvmf/common.sh@119 -- # set +e 00:17:29.509 07:26:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:29.509 07:26:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:29.509 rmmod nvme_tcp 
00:17:29.509 rmmod nvme_fabrics 00:17:29.509 rmmod nvme_keyring 00:17:29.509 07:26:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:29.509 07:26:51 -- nvmf/common.sh@123 -- # set -e 00:17:29.509 07:26:51 -- nvmf/common.sh@124 -- # return 0 00:17:29.509 07:26:51 -- nvmf/common.sh@477 -- # '[' -n 82485 ']' 00:17:29.509 07:26:51 -- nvmf/common.sh@478 -- # killprocess 82485 00:17:29.509 07:26:51 -- common/autotest_common.sh@936 -- # '[' -z 82485 ']' 00:17:29.509 07:26:51 -- common/autotest_common.sh@940 -- # kill -0 82485 00:17:29.510 07:26:51 -- common/autotest_common.sh@941 -- # uname 00:17:29.510 07:26:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.510 07:26:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82485 00:17:29.510 07:26:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:29.510 07:26:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:29.510 killing process with pid 82485 00:17:29.510 07:26:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82485' 00:17:29.510 07:26:51 -- common/autotest_common.sh@955 -- # kill 82485 00:17:29.510 07:26:51 -- common/autotest_common.sh@960 -- # wait 82485 00:17:30.080 07:26:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:30.080 07:26:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:30.080 07:26:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:30.080 07:26:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.080 07:26:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:30.080 07:26:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.080 07:26:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.080 07:26:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.080 07:26:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:30.080 00:17:30.080 real 0m33.499s 00:17:30.080 user 2m9.946s 00:17:30.080 sys 0m5.549s 00:17:30.080 07:26:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:30.080 07:26:52 -- common/autotest_common.sh@10 -- # set +x 00:17:30.080 ************************************ 00:17:30.080 END TEST nvmf_failover 00:17:30.080 ************************************ 00:17:30.080 07:26:52 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:30.080 07:26:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:30.080 07:26:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:30.080 07:26:52 -- common/autotest_common.sh@10 -- # set +x 00:17:30.080 ************************************ 00:17:30.080 START TEST nvmf_discovery 00:17:30.080 ************************************ 00:17:30.080 07:26:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:30.080 * Looking for test storage... 
00:17:30.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:30.080 07:26:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:30.080 07:26:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:30.080 07:26:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:30.080 07:26:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:30.080 07:26:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:30.080 07:26:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:30.080 07:26:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:30.080 07:26:52 -- scripts/common.sh@335 -- # IFS=.-: 00:17:30.080 07:26:52 -- scripts/common.sh@335 -- # read -ra ver1 00:17:30.080 07:26:52 -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.080 07:26:52 -- scripts/common.sh@336 -- # read -ra ver2 00:17:30.080 07:26:52 -- scripts/common.sh@337 -- # local 'op=<' 00:17:30.080 07:26:52 -- scripts/common.sh@339 -- # ver1_l=2 00:17:30.080 07:26:52 -- scripts/common.sh@340 -- # ver2_l=1 00:17:30.080 07:26:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:30.080 07:26:52 -- scripts/common.sh@343 -- # case "$op" in 00:17:30.080 07:26:52 -- scripts/common.sh@344 -- # : 1 00:17:30.080 07:26:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:30.080 07:26:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:30.080 07:26:52 -- scripts/common.sh@364 -- # decimal 1 00:17:30.080 07:26:52 -- scripts/common.sh@352 -- # local d=1 00:17:30.080 07:26:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.080 07:26:52 -- scripts/common.sh@354 -- # echo 1 00:17:30.080 07:26:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:30.080 07:26:52 -- scripts/common.sh@365 -- # decimal 2 00:17:30.080 07:26:52 -- scripts/common.sh@352 -- # local d=2 00:17:30.080 07:26:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.080 07:26:52 -- scripts/common.sh@354 -- # echo 2 00:17:30.080 07:26:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:30.080 07:26:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:30.080 07:26:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:30.080 07:26:52 -- scripts/common.sh@367 -- # return 0 00:17:30.080 07:26:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.080 07:26:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:30.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.080 --rc genhtml_branch_coverage=1 00:17:30.080 --rc genhtml_function_coverage=1 00:17:30.080 --rc genhtml_legend=1 00:17:30.080 --rc geninfo_all_blocks=1 00:17:30.080 --rc geninfo_unexecuted_blocks=1 00:17:30.080 00:17:30.080 ' 00:17:30.080 07:26:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:30.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.080 --rc genhtml_branch_coverage=1 00:17:30.080 --rc genhtml_function_coverage=1 00:17:30.080 --rc genhtml_legend=1 00:17:30.080 --rc geninfo_all_blocks=1 00:17:30.080 --rc geninfo_unexecuted_blocks=1 00:17:30.080 00:17:30.080 ' 00:17:30.080 07:26:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:30.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.080 --rc genhtml_branch_coverage=1 00:17:30.080 --rc genhtml_function_coverage=1 00:17:30.080 --rc genhtml_legend=1 00:17:30.080 --rc geninfo_all_blocks=1 00:17:30.080 --rc geninfo_unexecuted_blocks=1 00:17:30.080 00:17:30.080 ' 00:17:30.080 
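The block above is scripts/common.sh deciding whether the installed lcov (1.15 here) is older than 2, so that the matching branch and function coverage options can be exported; both version strings are split on dots and compared component by component. A standalone sketch of that comparison, not the script itself:

  ver_lt() {    # succeeds if $1 sorts before $2, comparing dot-separated numeric components
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov older than 2'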
07:26:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:30.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.080 --rc genhtml_branch_coverage=1 00:17:30.080 --rc genhtml_function_coverage=1 00:17:30.080 --rc genhtml_legend=1 00:17:30.080 --rc geninfo_all_blocks=1 00:17:30.080 --rc geninfo_unexecuted_blocks=1 00:17:30.080 00:17:30.080 ' 00:17:30.080 07:26:52 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:30.080 07:26:52 -- nvmf/common.sh@7 -- # uname -s 00:17:30.080 07:26:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.080 07:26:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.080 07:26:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.080 07:26:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.080 07:26:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.080 07:26:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.080 07:26:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.080 07:26:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.080 07:26:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.080 07:26:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.080 07:26:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:17:30.080 07:26:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:17:30.080 07:26:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.080 07:26:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.080 07:26:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:30.341 07:26:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:30.341 07:26:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.341 07:26:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.341 07:26:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.341 07:26:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.341 07:26:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.341 07:26:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.341 07:26:52 -- paths/export.sh@5 -- # export PATH 00:17:30.341 07:26:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.341 07:26:52 -- nvmf/common.sh@46 -- # : 0 00:17:30.341 07:26:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:30.341 07:26:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:30.341 07:26:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:30.341 07:26:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.341 07:26:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.341 07:26:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:30.341 07:26:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:30.341 07:26:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:30.341 07:26:52 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:30.341 07:26:52 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:30.341 07:26:52 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:30.341 07:26:52 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:30.341 07:26:52 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:30.341 07:26:52 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:30.341 07:26:52 -- host/discovery.sh@25 -- # nvmftestinit 00:17:30.341 07:26:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:30.341 07:26:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.341 07:26:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:30.341 07:26:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:30.341 07:26:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:30.341 07:26:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.341 07:26:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.341 07:26:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.341 07:26:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:30.341 07:26:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:30.341 07:26:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:30.341 07:26:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:30.341 07:26:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:30.341 07:26:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:30.341 07:26:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.341 07:26:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.341 07:26:52 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:30.341 07:26:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:30.341 07:26:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:30.341 07:26:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:30.341 07:26:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:30.341 07:26:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.341 07:26:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:30.341 07:26:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:30.341 07:26:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:30.341 07:26:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:30.341 07:26:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:30.341 07:26:52 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:30.341 Cannot find device "nvmf_tgt_br" 00:17:30.341 07:26:52 -- nvmf/common.sh@154 -- # true 00:17:30.341 07:26:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:30.341 Cannot find device "nvmf_tgt_br2" 00:17:30.341 07:26:52 -- nvmf/common.sh@155 -- # true 00:17:30.341 07:26:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:30.341 07:26:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:30.341 Cannot find device "nvmf_tgt_br" 00:17:30.341 07:26:52 -- nvmf/common.sh@157 -- # true 00:17:30.341 07:26:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:30.341 Cannot find device "nvmf_tgt_br2" 00:17:30.341 07:26:52 -- nvmf/common.sh@158 -- # true 00:17:30.341 07:26:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:30.341 07:26:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:30.341 07:26:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.341 07:26:52 -- nvmf/common.sh@161 -- # true 00:17:30.341 07:26:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:30.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.341 07:26:52 -- nvmf/common.sh@162 -- # true 00:17:30.341 07:26:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:30.341 07:26:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:30.341 07:26:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:30.341 07:26:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:30.342 07:26:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:30.342 07:26:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:30.342 07:26:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:30.342 07:26:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:30.342 07:26:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:30.342 07:26:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:30.342 07:26:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:30.342 07:26:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:30.601 07:26:52 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:30.601 07:26:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:30.601 07:26:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:30.601 07:26:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:30.601 07:26:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:30.601 07:26:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:30.601 07:26:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:30.601 07:26:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:30.601 07:26:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:30.601 07:26:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:30.601 07:26:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:30.601 07:26:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:30.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:17:30.601 00:17:30.601 --- 10.0.0.2 ping statistics --- 00:17:30.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.601 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:30.601 07:26:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:30.601 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:30.601 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:17:30.601 00:17:30.601 --- 10.0.0.3 ping statistics --- 00:17:30.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.601 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:17:30.601 07:26:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:30.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:17:30.601 00:17:30.601 --- 10.0.0.1 ping statistics --- 00:17:30.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.601 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:30.601 07:26:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.601 07:26:52 -- nvmf/common.sh@421 -- # return 0 00:17:30.601 07:26:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:30.601 07:26:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.601 07:26:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:30.601 07:26:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:30.601 07:26:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.601 07:26:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:30.601 07:26:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:30.601 07:26:52 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:30.601 07:26:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:30.601 07:26:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.601 07:26:52 -- common/autotest_common.sh@10 -- # set +x 00:17:30.601 07:26:52 -- nvmf/common.sh@469 -- # nvmfpid=83109 00:17:30.601 07:26:52 -- nvmf/common.sh@470 -- # waitforlisten 83109 00:17:30.601 07:26:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.601 07:26:52 -- common/autotest_common.sh@829 -- # '[' -z 83109 ']' 00:17:30.601 07:26:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.601 07:26:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.601 07:26:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.602 07:26:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.602 07:26:52 -- common/autotest_common.sh@10 -- # set +x 00:17:30.602 [2024-11-28 07:26:52.789344] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:30.602 [2024-11-28 07:26:52.789447] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.860 [2024-11-28 07:26:52.927509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.860 [2024-11-28 07:26:53.017140] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:30.860 [2024-11-28 07:26:53.017335] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.860 [2024-11-28 07:26:53.017352] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.861 [2024-11-28 07:26:53.017364] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:30.861 [2024-11-28 07:26:53.017395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.430 07:26:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.430 07:26:53 -- common/autotest_common.sh@862 -- # return 0 00:17:31.430 07:26:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:31.430 07:26:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.430 07:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:31.689 07:26:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.689 07:26:53 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.689 07:26:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.689 07:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:31.689 [2024-11-28 07:26:53.740135] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.689 07:26:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.689 07:26:53 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:31.689 07:26:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.689 07:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:31.689 [2024-11-28 07:26:53.748273] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:31.689 07:26:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.689 07:26:53 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:31.689 07:26:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.689 07:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:31.689 null0 00:17:31.689 07:26:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.689 07:26:53 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:31.689 07:26:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.689 07:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:31.689 null1 00:17:31.689 07:26:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.689 07:26:53 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:31.689 07:26:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.689 07:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:31.689 07:26:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.689 07:26:53 -- host/discovery.sh@45 -- # hostpid=83140 00:17:31.689 07:26:53 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:31.689 07:26:53 -- host/discovery.sh@46 -- # waitforlisten 83140 /tmp/host.sock 00:17:31.689 07:26:53 -- common/autotest_common.sh@829 -- # '[' -z 83140 ']' 00:17:31.689 07:26:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:31.689 07:26:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.689 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:31.689 07:26:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:31.689 07:26:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.689 07:26:53 -- common/autotest_common.sh@10 -- # set +x 00:17:31.689 [2024-11-28 07:26:53.830858] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
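At this point the target side of the discovery test is in place: a TCP transport, the well-known discovery subsystem listening on 10.0.0.2:8009, and two null bdevs that will back the namespaces added to nqn.2016-06.io.spdk:cnode0 later in the trace. The rpc_cmd wrapper used above boils down to the following rpc.py calls against the target's RPC socket (a sketch; defaults assumed for anything not shown):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc.py bdev_null_create null0 1000 512
  rpc.py bdev_null_create null1 1000 512
  rpc.py bdev_wait_for_examine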
00:17:31.689 [2024-11-28 07:26:53.830956] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83140 ] 00:17:31.948 [2024-11-28 07:26:53.973444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.948 [2024-11-28 07:26:54.082371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:31.948 [2024-11-28 07:26:54.082562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.889 07:26:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.889 07:26:54 -- common/autotest_common.sh@862 -- # return 0 00:17:32.889 07:26:54 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.889 07:26:54 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:32.889 07:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:54 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:32.889 07:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:54 -- host/discovery.sh@72 -- # notify_id=0 00:17:32.889 07:26:54 -- host/discovery.sh@78 -- # get_subsystem_names 00:17:32.889 07:26:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:32.889 07:26:54 -- host/discovery.sh@59 -- # sort 00:17:32.889 07:26:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:32.889 07:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:54 -- host/discovery.sh@59 -- # xargs 00:17:32.889 07:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:54 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:17:32.889 07:26:54 -- host/discovery.sh@79 -- # get_bdev_list 00:17:32.889 07:26:54 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.889 07:26:54 -- host/discovery.sh@55 -- # sort 00:17:32.889 07:26:54 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:32.889 07:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:54 -- host/discovery.sh@55 -- # xargs 00:17:32.889 07:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:54 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:17:32.889 07:26:54 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:32.889 07:26:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:54 -- host/discovery.sh@82 -- # get_subsystem_names 00:17:32.889 07:26:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:32.889 07:26:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:32.889 07:26:54 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:54 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:54 -- host/discovery.sh@59 -- # xargs 00:17:32.889 07:26:54 -- host/discovery.sh@59 -- # sort 00:17:32.889 07:26:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:55 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:17:32.889 07:26:55 -- host/discovery.sh@83 -- # get_bdev_list 00:17:32.889 07:26:55 -- host/discovery.sh@55 -- # sort 00:17:32.889 07:26:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.889 07:26:55 -- host/discovery.sh@55 -- # xargs 00:17:32.889 07:26:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:32.889 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:55 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:32.889 07:26:55 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:32.889 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:55 -- host/discovery.sh@86 -- # get_subsystem_names 00:17:32.889 07:26:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:32.889 07:26:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:32.889 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:55 -- host/discovery.sh@59 -- # sort 00:17:32.889 07:26:55 -- host/discovery.sh@59 -- # xargs 00:17:32.889 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.889 07:26:55 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:17:32.889 07:26:55 -- host/discovery.sh@87 -- # get_bdev_list 00:17:32.889 07:26:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.889 07:26:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:32.889 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.889 07:26:55 -- host/discovery.sh@55 -- # sort 00:17:32.889 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:32.889 07:26:55 -- host/discovery.sh@55 -- # xargs 00:17:32.889 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.148 07:26:55 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:33.148 07:26:55 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:33.148 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.148 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:33.148 [2024-11-28 07:26:55.184730] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.148 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.148 07:26:55 -- host/discovery.sh@92 -- # get_subsystem_names 00:17:33.148 07:26:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:33.148 07:26:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:33.148 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.148 07:26:55 -- host/discovery.sh@59 -- # sort 00:17:33.148 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:33.148 07:26:55 -- host/discovery.sh@59 -- # xargs 
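On the host side, the second nvmf_tgt instance (started with -r /tmp/host.sock) is pointed at the discovery service and then repeatedly queried for the controllers and bdevs it creates as subsystems and namespaces appear. Reduced to the rpc calls seen in this trace, the host-side flow is roughly:

  rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0 once cnode0 is discovered
  rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'               # expect nvme0n1, nvme0n2 as namespaces are added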
00:17:33.148 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.148 07:26:55 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:33.148 07:26:55 -- host/discovery.sh@93 -- # get_bdev_list 00:17:33.148 07:26:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.148 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.148 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:33.148 07:26:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:33.148 07:26:55 -- host/discovery.sh@55 -- # sort 00:17:33.148 07:26:55 -- host/discovery.sh@55 -- # xargs 00:17:33.148 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.148 07:26:55 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:17:33.148 07:26:55 -- host/discovery.sh@94 -- # get_notification_count 00:17:33.148 07:26:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:33.148 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.148 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:33.148 07:26:55 -- host/discovery.sh@74 -- # jq '. | length' 00:17:33.148 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.148 07:26:55 -- host/discovery.sh@74 -- # notification_count=0 00:17:33.148 07:26:55 -- host/discovery.sh@75 -- # notify_id=0 00:17:33.148 07:26:55 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:17:33.148 07:26:55 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:33.148 07:26:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.148 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:17:33.148 07:26:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.148 07:26:55 -- host/discovery.sh@100 -- # sleep 1 00:17:33.717 [2024-11-28 07:26:55.833244] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:33.717 [2024-11-28 07:26:55.833294] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:33.717 [2024-11-28 07:26:55.833321] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:33.717 [2024-11-28 07:26:55.839282] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:33.717 [2024-11-28 07:26:55.895563] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:33.717 [2024-11-28 07:26:55.895595] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:34.284 07:26:56 -- host/discovery.sh@101 -- # get_subsystem_names 00:17:34.284 07:26:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:34.284 07:26:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:34.284 07:26:56 -- host/discovery.sh@59 -- # sort 00:17:34.284 07:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.284 07:26:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.284 07:26:56 -- host/discovery.sh@59 -- # xargs 00:17:34.284 07:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.284 07:26:56 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.284 07:26:56 -- host/discovery.sh@102 -- # get_bdev_list 00:17:34.284 07:26:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:17:34.284 07:26:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:34.284 07:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.284 07:26:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.284 07:26:56 -- host/discovery.sh@55 -- # sort 00:17:34.284 07:26:56 -- host/discovery.sh@55 -- # xargs 00:17:34.284 07:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.284 07:26:56 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:34.284 07:26:56 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:17:34.284 07:26:56 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:34.284 07:26:56 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:34.284 07:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.284 07:26:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.284 07:26:56 -- host/discovery.sh@63 -- # sort -n 00:17:34.284 07:26:56 -- host/discovery.sh@63 -- # xargs 00:17:34.285 07:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.285 07:26:56 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:17:34.285 07:26:56 -- host/discovery.sh@104 -- # get_notification_count 00:17:34.285 07:26:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:34.285 07:26:56 -- host/discovery.sh@74 -- # jq '. | length' 00:17:34.285 07:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.285 07:26:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.285 07:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.544 07:26:56 -- host/discovery.sh@74 -- # notification_count=1 00:17:34.544 07:26:56 -- host/discovery.sh@75 -- # notify_id=1 00:17:34.544 07:26:56 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:17:34.544 07:26:56 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:34.544 07:26:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.544 07:26:56 -- common/autotest_common.sh@10 -- # set +x 00:17:34.544 07:26:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.544 07:26:56 -- host/discovery.sh@109 -- # sleep 1 00:17:35.484 07:26:57 -- host/discovery.sh@110 -- # get_bdev_list 00:17:35.484 07:26:57 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.484 07:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.484 07:26:57 -- common/autotest_common.sh@10 -- # set +x 00:17:35.484 07:26:57 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:35.484 07:26:57 -- host/discovery.sh@55 -- # xargs 00:17:35.484 07:26:57 -- host/discovery.sh@55 -- # sort 00:17:35.484 07:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.484 07:26:57 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:35.484 07:26:57 -- host/discovery.sh@111 -- # get_notification_count 00:17:35.484 07:26:57 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:35.484 07:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.484 07:26:57 -- common/autotest_common.sh@10 -- # set +x 00:17:35.484 07:26:57 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:35.484 07:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.484 07:26:57 -- host/discovery.sh@74 -- # notification_count=1 00:17:35.484 07:26:57 -- host/discovery.sh@75 -- # notify_id=2 00:17:35.484 07:26:57 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:17:35.484 07:26:57 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:35.484 07:26:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.484 07:26:57 -- common/autotest_common.sh@10 -- # set +x 00:17:35.484 [2024-11-28 07:26:57.703893] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:35.484 [2024-11-28 07:26:57.704537] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:35.484 [2024-11-28 07:26:57.704573] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:35.484 07:26:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.484 07:26:57 -- host/discovery.sh@117 -- # sleep 1 00:17:35.484 [2024-11-28 07:26:57.710512] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:35.750 [2024-11-28 07:26:57.767814] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:35.750 [2024-11-28 07:26:57.767842] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:35.750 [2024-11-28 07:26:57.767849] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:36.687 07:26:58 -- host/discovery.sh@118 -- # get_subsystem_names 00:17:36.687 07:26:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:36.687 07:26:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:36.687 07:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.687 07:26:58 -- host/discovery.sh@59 -- # sort 00:17:36.687 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:17:36.687 07:26:58 -- host/discovery.sh@59 -- # xargs 00:17:36.687 07:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.687 07:26:58 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.687 07:26:58 -- host/discovery.sh@119 -- # get_bdev_list 00:17:36.687 07:26:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:36.687 07:26:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:36.687 07:26:58 -- host/discovery.sh@55 -- # sort 00:17:36.687 07:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.687 07:26:58 -- host/discovery.sh@55 -- # xargs 00:17:36.687 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:17:36.687 07:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.687 07:26:58 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:36.687 07:26:58 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:17:36.687 07:26:58 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:36.687 07:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.687 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:17:36.687 07:26:58 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:36.687 07:26:58 -- host/discovery.sh@63 
-- # sort -n 00:17:36.687 07:26:58 -- host/discovery.sh@63 -- # xargs 00:17:36.687 07:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.688 07:26:58 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:36.688 07:26:58 -- host/discovery.sh@121 -- # get_notification_count 00:17:36.688 07:26:58 -- host/discovery.sh@74 -- # jq '. | length' 00:17:36.688 07:26:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:36.688 07:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.688 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:17:36.688 07:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.688 07:26:58 -- host/discovery.sh@74 -- # notification_count=0 00:17:36.688 07:26:58 -- host/discovery.sh@75 -- # notify_id=2 00:17:36.688 07:26:58 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:17:36.688 07:26:58 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:36.688 07:26:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.688 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:17:36.688 [2024-11-28 07:26:58.935040] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:36.688 [2024-11-28 07:26:58.935084] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:36.688 07:26:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.688 07:26:58 -- host/discovery.sh@127 -- # sleep 1 00:17:36.688 [2024-11-28 07:26:58.941019] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:36.688 [2024-11-28 07:26:58.941052] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:36.688 [2024-11-28 07:26:58.941165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.688 [2024-11-28 07:26:58.941198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.688 [2024-11-28 07:26:58.941211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.688 [2024-11-28 07:26:58.941220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.688 [2024-11-28 07:26:58.941230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.688 [2024-11-28 07:26:58.941239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.688 [2024-11-28 07:26:58.941248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:36.688 [2024-11-28 07:26:58.941257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:36.688 [2024-11-28 07:26:58.941266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18d71f0 is same with the state(5) to be set 00:17:38.067 07:26:59 -- host/discovery.sh@128 -- # 
get_subsystem_names 00:17:38.067 07:26:59 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:38.067 07:26:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.067 07:26:59 -- common/autotest_common.sh@10 -- # set +x 00:17:38.067 07:26:59 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:38.067 07:26:59 -- host/discovery.sh@59 -- # sort 00:17:38.067 07:26:59 -- host/discovery.sh@59 -- # xargs 00:17:38.067 07:26:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.067 07:26:59 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.067 07:27:00 -- host/discovery.sh@129 -- # get_bdev_list 00:17:38.067 07:27:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:38.067 07:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.067 07:27:00 -- common/autotest_common.sh@10 -- # set +x 00:17:38.067 07:27:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:38.067 07:27:00 -- host/discovery.sh@55 -- # sort 00:17:38.067 07:27:00 -- host/discovery.sh@55 -- # xargs 00:17:38.067 07:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.067 07:27:00 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:38.067 07:27:00 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:17:38.067 07:27:00 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:38.067 07:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.067 07:27:00 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:38.067 07:27:00 -- common/autotest_common.sh@10 -- # set +x 00:17:38.067 07:27:00 -- host/discovery.sh@63 -- # sort -n 00:17:38.067 07:27:00 -- host/discovery.sh@63 -- # xargs 00:17:38.067 07:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.067 07:27:00 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:17:38.067 07:27:00 -- host/discovery.sh@131 -- # get_notification_count 00:17:38.067 07:27:00 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:38.067 07:27:00 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:38.067 07:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.067 07:27:00 -- common/autotest_common.sh@10 -- # set +x 00:17:38.067 07:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.067 07:27:00 -- host/discovery.sh@74 -- # notification_count=0 00:17:38.067 07:27:00 -- host/discovery.sh@75 -- # notify_id=2 00:17:38.067 07:27:00 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:17:38.067 07:27:00 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:38.067 07:27:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.067 07:27:00 -- common/autotest_common.sh@10 -- # set +x 00:17:38.067 07:27:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.067 07:27:00 -- host/discovery.sh@135 -- # sleep 1 00:17:39.004 07:27:01 -- host/discovery.sh@136 -- # get_subsystem_names 00:17:39.004 07:27:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:39.004 07:27:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.004 07:27:01 -- common/autotest_common.sh@10 -- # set +x 00:17:39.004 07:27:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:39.004 07:27:01 -- host/discovery.sh@59 -- # sort 00:17:39.004 07:27:01 -- host/discovery.sh@59 -- # xargs 00:17:39.004 07:27:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.004 07:27:01 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:17:39.004 07:27:01 -- host/discovery.sh@137 -- # get_bdev_list 00:17:39.004 07:27:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:39.004 07:27:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:39.004 07:27:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.004 07:27:01 -- host/discovery.sh@55 -- # sort 00:17:39.004 07:27:01 -- common/autotest_common.sh@10 -- # set +x 00:17:39.004 07:27:01 -- host/discovery.sh@55 -- # xargs 00:17:39.004 07:27:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.262 07:27:01 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:17:39.262 07:27:01 -- host/discovery.sh@138 -- # get_notification_count 00:17:39.262 07:27:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:39.262 07:27:01 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:39.262 07:27:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.262 07:27:01 -- common/autotest_common.sh@10 -- # set +x 00:17:39.262 07:27:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.262 07:27:01 -- host/discovery.sh@74 -- # notification_count=2 00:17:39.262 07:27:01 -- host/discovery.sh@75 -- # notify_id=4 00:17:39.262 07:27:01 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:17:39.262 07:27:01 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:39.262 07:27:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.262 07:27:01 -- common/autotest_common.sh@10 -- # set +x 00:17:40.199 [2024-11-28 07:27:02.365600] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:40.199 [2024-11-28 07:27:02.365648] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:40.199 [2024-11-28 07:27:02.365667] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:40.199 [2024-11-28 07:27:02.371645] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:40.199 [2024-11-28 07:27:02.431432] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:40.199 [2024-11-28 07:27:02.431488] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:40.199 07:27:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.199 07:27:02 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:40.199 07:27:02 -- common/autotest_common.sh@650 -- # local es=0 00:17:40.199 07:27:02 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:40.199 07:27:02 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:40.199 07:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.199 07:27:02 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:40.199 07:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.199 07:27:02 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:40.199 07:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.199 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.199 request: 00:17:40.199 { 00:17:40.199 "name": "nvme", 00:17:40.199 "trtype": "tcp", 00:17:40.199 "traddr": "10.0.0.2", 00:17:40.199 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:40.199 "adrfam": "ipv4", 00:17:40.199 "trsvcid": "8009", 00:17:40.199 "wait_for_attach": true, 00:17:40.199 "method": "bdev_nvme_start_discovery", 00:17:40.199 "req_id": 1 00:17:40.199 } 00:17:40.199 Got JSON-RPC error response 00:17:40.199 response: 00:17:40.199 { 00:17:40.199 "code": -17, 00:17:40.199 "message": "File exists" 00:17:40.199 } 00:17:40.199 07:27:02 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:40.199 07:27:02 -- common/autotest_common.sh@653 -- # es=1 00:17:40.199 07:27:02 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.199 07:27:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.199 07:27:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.199 07:27:02 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:17:40.199 07:27:02 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:40.199 07:27:02 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:40.199 07:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.199 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.199 07:27:02 -- host/discovery.sh@67 -- # sort 00:17:40.199 07:27:02 -- host/discovery.sh@67 -- # xargs 00:17:40.458 07:27:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.458 07:27:02 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:17:40.458 07:27:02 -- host/discovery.sh@147 -- # get_bdev_list 00:17:40.458 07:27:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.458 07:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.458 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.458 07:27:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:40.458 07:27:02 -- host/discovery.sh@55 -- # sort 00:17:40.458 07:27:02 -- host/discovery.sh@55 -- # xargs 00:17:40.458 07:27:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.458 07:27:02 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:40.458 07:27:02 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:40.458 07:27:02 -- common/autotest_common.sh@650 -- # local es=0 00:17:40.458 07:27:02 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:40.458 07:27:02 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:40.458 07:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.458 07:27:02 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:40.458 07:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.458 07:27:02 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:40.458 07:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.458 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.458 request: 00:17:40.458 { 00:17:40.458 "name": "nvme_second", 00:17:40.458 "trtype": "tcp", 00:17:40.458 "traddr": "10.0.0.2", 00:17:40.458 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:40.458 "adrfam": "ipv4", 00:17:40.458 "trsvcid": "8009", 00:17:40.458 "wait_for_attach": true, 00:17:40.458 "method": "bdev_nvme_start_discovery", 00:17:40.458 "req_id": 1 00:17:40.458 } 00:17:40.458 Got JSON-RPC error response 00:17:40.458 response: 00:17:40.458 { 00:17:40.458 "code": -17, 00:17:40.458 "message": "File exists" 00:17:40.458 } 00:17:40.458 07:27:02 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:40.458 07:27:02 -- common/autotest_common.sh@653 -- # es=1 00:17:40.458 07:27:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:40.458 07:27:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:40.458 07:27:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:40.458 
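A minimal way to reproduce the -17 "File exists" response captured above, assuming SPDK's scripts/rpc.py client and the same /tmp/host.sock RPC socket used by this test run (paths shown are illustrative, not taken from this log):
  # start a discovery service named "nvme" against the 8009 discovery listener (mirrors host/discovery.sh above)
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # issuing the same bdev_nvme_start_discovery again while that service is still running is rejected,
  # and the JSON-RPC error body carries code -17 with the message "File exists", as in the log above
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w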
07:27:02 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:17:40.458 07:27:02 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:40.458 07:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.458 07:27:02 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:40.458 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.458 07:27:02 -- host/discovery.sh@67 -- # sort 00:17:40.458 07:27:02 -- host/discovery.sh@67 -- # xargs 00:17:40.458 07:27:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.458 07:27:02 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:17:40.458 07:27:02 -- host/discovery.sh@153 -- # get_bdev_list 00:17:40.458 07:27:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:40.458 07:27:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.458 07:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.458 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:17:40.458 07:27:02 -- host/discovery.sh@55 -- # sort 00:17:40.458 07:27:02 -- host/discovery.sh@55 -- # xargs 00:17:40.458 07:27:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.458 07:27:02 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:40.458 07:27:02 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:40.458 07:27:02 -- common/autotest_common.sh@650 -- # local es=0 00:17:40.458 07:27:02 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:40.458 07:27:02 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:40.458 07:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.458 07:27:02 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:40.458 07:27:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:40.458 07:27:02 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:40.458 07:27:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.458 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:17:41.834 [2024-11-28 07:27:03.713038] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:41.834 [2024-11-28 07:27:03.713180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:41.834 [2024-11-28 07:27:03.713225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:41.834 [2024-11-28 07:27:03.713241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19705c0 with addr=10.0.0.2, port=8010 00:17:41.834 [2024-11-28 07:27:03.713270] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:41.834 [2024-11-28 07:27:03.713281] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:41.834 [2024-11-28 07:27:03.713291] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:42.771 [2024-11-28 07:27:04.713032] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:42.771 [2024-11-28 07:27:04.713144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:17:42.771 [2024-11-28 07:27:04.713186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:42.771 [2024-11-28 07:27:04.713202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1932bc0 with addr=10.0.0.2, port=8010 00:17:42.771 [2024-11-28 07:27:04.713229] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:42.771 [2024-11-28 07:27:04.713240] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:42.771 [2024-11-28 07:27:04.713250] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:43.709 [2024-11-28 07:27:05.712846] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:43.709 request: 00:17:43.709 { 00:17:43.709 "name": "nvme_second", 00:17:43.709 "trtype": "tcp", 00:17:43.709 "traddr": "10.0.0.2", 00:17:43.709 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:43.709 "adrfam": "ipv4", 00:17:43.709 "trsvcid": "8010", 00:17:43.709 "attach_timeout_ms": 3000, 00:17:43.709 "method": "bdev_nvme_start_discovery", 00:17:43.709 "req_id": 1 00:17:43.709 } 00:17:43.709 Got JSON-RPC error response 00:17:43.710 response: 00:17:43.710 { 00:17:43.710 "code": -110, 00:17:43.710 "message": "Connection timed out" 00:17:43.710 } 00:17:43.710 07:27:05 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:43.710 07:27:05 -- common/autotest_common.sh@653 -- # es=1 00:17:43.710 07:27:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.710 07:27:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.710 07:27:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.710 07:27:05 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:17:43.710 07:27:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:43.710 07:27:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:43.710 07:27:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.710 07:27:05 -- common/autotest_common.sh@10 -- # set +x 00:17:43.710 07:27:05 -- host/discovery.sh@67 -- # sort 00:17:43.710 07:27:05 -- host/discovery.sh@67 -- # xargs 00:17:43.710 07:27:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.710 07:27:05 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:17:43.710 07:27:05 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:17:43.710 07:27:05 -- host/discovery.sh@162 -- # kill 83140 00:17:43.710 07:27:05 -- host/discovery.sh@163 -- # nvmftestfini 00:17:43.710 07:27:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:43.710 07:27:05 -- nvmf/common.sh@116 -- # sync 00:17:43.710 07:27:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:43.710 07:27:05 -- nvmf/common.sh@119 -- # set +e 00:17:43.710 07:27:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:43.710 07:27:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:43.710 rmmod nvme_tcp 00:17:43.710 rmmod nvme_fabrics 00:17:43.710 rmmod nvme_keyring 00:17:43.710 07:27:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:43.710 07:27:05 -- nvmf/common.sh@123 -- # set -e 00:17:43.710 07:27:05 -- nvmf/common.sh@124 -- # return 0 00:17:43.710 07:27:05 -- nvmf/common.sh@477 -- # '[' -n 83109 ']' 00:17:43.710 07:27:05 -- nvmf/common.sh@478 -- # killprocess 83109 00:17:43.710 07:27:05 -- common/autotest_common.sh@936 -- # '[' -z 83109 ']' 00:17:43.710 07:27:05 -- common/autotest_common.sh@940 -- # kill -0 83109 00:17:43.710 07:27:05 -- 
common/autotest_common.sh@941 -- # uname 00:17:43.710 07:27:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.710 07:27:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83109 00:17:43.710 07:27:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:43.710 07:27:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:43.710 killing process with pid 83109 00:17:43.710 07:27:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83109' 00:17:43.710 07:27:05 -- common/autotest_common.sh@955 -- # kill 83109 00:17:43.710 07:27:05 -- common/autotest_common.sh@960 -- # wait 83109 00:17:44.278 07:27:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:44.278 07:27:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:44.278 07:27:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:44.278 07:27:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.278 07:27:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:44.278 07:27:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.278 07:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.278 07:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.278 07:27:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:44.278 00:17:44.278 real 0m14.116s 00:17:44.278 user 0m26.836s 00:17:44.278 sys 0m2.403s 00:17:44.278 07:27:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:44.278 07:27:06 -- common/autotest_common.sh@10 -- # set +x 00:17:44.278 ************************************ 00:17:44.278 END TEST nvmf_discovery 00:17:44.278 ************************************ 00:17:44.278 07:27:06 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:44.278 07:27:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.278 07:27:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:44.278 07:27:06 -- common/autotest_common.sh@10 -- # set +x 00:17:44.278 ************************************ 00:17:44.278 START TEST nvmf_discovery_remove_ifc 00:17:44.279 ************************************ 00:17:44.279 07:27:06 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:44.279 * Looking for test storage... 
00:17:44.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:44.279 07:27:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:44.279 07:27:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:44.279 07:27:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:44.538 07:27:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:44.538 07:27:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:44.538 07:27:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:44.538 07:27:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:44.538 07:27:06 -- scripts/common.sh@335 -- # IFS=.-: 00:17:44.538 07:27:06 -- scripts/common.sh@335 -- # read -ra ver1 00:17:44.538 07:27:06 -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.538 07:27:06 -- scripts/common.sh@336 -- # read -ra ver2 00:17:44.538 07:27:06 -- scripts/common.sh@337 -- # local 'op=<' 00:17:44.538 07:27:06 -- scripts/common.sh@339 -- # ver1_l=2 00:17:44.538 07:27:06 -- scripts/common.sh@340 -- # ver2_l=1 00:17:44.538 07:27:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:44.538 07:27:06 -- scripts/common.sh@343 -- # case "$op" in 00:17:44.538 07:27:06 -- scripts/common.sh@344 -- # : 1 00:17:44.538 07:27:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:44.538 07:27:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:44.538 07:27:06 -- scripts/common.sh@364 -- # decimal 1 00:17:44.538 07:27:06 -- scripts/common.sh@352 -- # local d=1 00:17:44.538 07:27:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.538 07:27:06 -- scripts/common.sh@354 -- # echo 1 00:17:44.538 07:27:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:44.538 07:27:06 -- scripts/common.sh@365 -- # decimal 2 00:17:44.538 07:27:06 -- scripts/common.sh@352 -- # local d=2 00:17:44.538 07:27:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.538 07:27:06 -- scripts/common.sh@354 -- # echo 2 00:17:44.538 07:27:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:44.539 07:27:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:44.539 07:27:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:44.539 07:27:06 -- scripts/common.sh@367 -- # return 0 00:17:44.539 07:27:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.539 07:27:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:44.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.539 --rc genhtml_branch_coverage=1 00:17:44.539 --rc genhtml_function_coverage=1 00:17:44.539 --rc genhtml_legend=1 00:17:44.539 --rc geninfo_all_blocks=1 00:17:44.539 --rc geninfo_unexecuted_blocks=1 00:17:44.539 00:17:44.539 ' 00:17:44.539 07:27:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:44.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.539 --rc genhtml_branch_coverage=1 00:17:44.539 --rc genhtml_function_coverage=1 00:17:44.539 --rc genhtml_legend=1 00:17:44.539 --rc geninfo_all_blocks=1 00:17:44.539 --rc geninfo_unexecuted_blocks=1 00:17:44.539 00:17:44.539 ' 00:17:44.539 07:27:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:44.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.539 --rc genhtml_branch_coverage=1 00:17:44.539 --rc genhtml_function_coverage=1 00:17:44.539 --rc genhtml_legend=1 00:17:44.539 --rc geninfo_all_blocks=1 00:17:44.539 --rc geninfo_unexecuted_blocks=1 00:17:44.539 00:17:44.539 ' 00:17:44.539 
07:27:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:44.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.539 --rc genhtml_branch_coverage=1 00:17:44.539 --rc genhtml_function_coverage=1 00:17:44.539 --rc genhtml_legend=1 00:17:44.539 --rc geninfo_all_blocks=1 00:17:44.539 --rc geninfo_unexecuted_blocks=1 00:17:44.539 00:17:44.539 ' 00:17:44.539 07:27:06 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:44.539 07:27:06 -- nvmf/common.sh@7 -- # uname -s 00:17:44.539 07:27:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.539 07:27:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.539 07:27:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.539 07:27:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.539 07:27:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.539 07:27:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.539 07:27:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.539 07:27:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.539 07:27:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.539 07:27:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.539 07:27:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:17:44.539 07:27:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:17:44.539 07:27:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.539 07:27:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.539 07:27:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:44.539 07:27:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:44.539 07:27:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.539 07:27:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.539 07:27:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.539 07:27:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.539 07:27:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.539 07:27:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.539 07:27:06 -- paths/export.sh@5 -- # export PATH 00:17:44.539 07:27:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.539 07:27:06 -- nvmf/common.sh@46 -- # : 0 00:17:44.539 07:27:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:44.539 07:27:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:44.539 07:27:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:44.539 07:27:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.539 07:27:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.539 07:27:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:44.539 07:27:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:44.539 07:27:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:44.539 07:27:06 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:44.539 07:27:06 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:44.539 07:27:06 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:44.539 07:27:06 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:44.539 07:27:06 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:44.539 07:27:06 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:44.539 07:27:06 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:44.539 07:27:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:44.539 07:27:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.539 07:27:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:44.539 07:27:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:44.539 07:27:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:44.539 07:27:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.539 07:27:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.539 07:27:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.539 07:27:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:44.539 07:27:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:44.539 07:27:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:44.539 07:27:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:44.539 07:27:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:44.539 07:27:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:44.539 07:27:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.539 07:27:06 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.539 07:27:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:44.539 07:27:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:44.539 07:27:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:44.539 07:27:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:44.539 07:27:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:44.539 07:27:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.539 07:27:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:44.539 07:27:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:44.539 07:27:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:44.539 07:27:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:44.539 07:27:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:44.539 07:27:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:44.539 Cannot find device "nvmf_tgt_br" 00:17:44.539 07:27:06 -- nvmf/common.sh@154 -- # true 00:17:44.539 07:27:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.539 Cannot find device "nvmf_tgt_br2" 00:17:44.539 07:27:06 -- nvmf/common.sh@155 -- # true 00:17:44.539 07:27:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:44.539 07:27:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:44.539 Cannot find device "nvmf_tgt_br" 00:17:44.539 07:27:06 -- nvmf/common.sh@157 -- # true 00:17:44.539 07:27:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:44.539 Cannot find device "nvmf_tgt_br2" 00:17:44.539 07:27:06 -- nvmf/common.sh@158 -- # true 00:17:44.539 07:27:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:44.539 07:27:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:44.539 07:27:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.539 07:27:06 -- nvmf/common.sh@161 -- # true 00:17:44.539 07:27:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.539 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.539 07:27:06 -- nvmf/common.sh@162 -- # true 00:17:44.539 07:27:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.539 07:27:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.539 07:27:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.539 07:27:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.539 07:27:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.539 07:27:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.799 07:27:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.799 07:27:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:44.799 07:27:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:44.799 07:27:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:44.799 07:27:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:44.799 07:27:06 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:44.799 07:27:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:44.799 07:27:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.799 07:27:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.799 07:27:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.799 07:27:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:44.799 07:27:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:44.799 07:27:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:44.799 07:27:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.799 07:27:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.799 07:27:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.799 07:27:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.799 07:27:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:44.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:17:44.799 00:17:44.799 --- 10.0.0.2 ping statistics --- 00:17:44.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.799 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:44.799 07:27:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:44.799 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.799 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:17:44.799 00:17:44.799 --- 10.0.0.3 ping statistics --- 00:17:44.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.799 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:44.799 07:27:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:44.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:17:44.799 00:17:44.799 --- 10.0.0.1 ping statistics --- 00:17:44.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.799 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:44.799 07:27:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.799 07:27:06 -- nvmf/common.sh@421 -- # return 0 00:17:44.799 07:27:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:44.799 07:27:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.799 07:27:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:44.799 07:27:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:44.799 07:27:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.799 07:27:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:44.799 07:27:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:44.799 07:27:06 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:44.799 07:27:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:44.799 07:27:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.799 07:27:06 -- common/autotest_common.sh@10 -- # set +x 00:17:44.799 07:27:06 -- nvmf/common.sh@469 -- # nvmfpid=83647 00:17:44.799 07:27:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.799 07:27:06 -- nvmf/common.sh@470 -- # waitforlisten 83647 00:17:44.799 07:27:06 -- common/autotest_common.sh@829 -- # '[' -z 83647 ']' 00:17:44.799 07:27:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.799 07:27:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.799 07:27:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.799 07:27:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.799 07:27:06 -- common/autotest_common.sh@10 -- # set +x 00:17:44.799 [2024-11-28 07:27:07.039942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:44.799 [2024-11-28 07:27:07.040076] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.059 [2024-11-28 07:27:07.184266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.059 [2024-11-28 07:27:07.289818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:45.059 [2024-11-28 07:27:07.290026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.059 [2024-11-28 07:27:07.290043] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.059 [2024-11-28 07:27:07.290056] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:45.059 [2024-11-28 07:27:07.290097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.998 07:27:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:45.998 07:27:08 -- common/autotest_common.sh@862 -- # return 0 00:17:45.998 07:27:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:45.998 07:27:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:45.998 07:27:08 -- common/autotest_common.sh@10 -- # set +x 00:17:45.998 07:27:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.998 07:27:08 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:45.998 07:27:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.998 07:27:08 -- common/autotest_common.sh@10 -- # set +x 00:17:45.998 [2024-11-28 07:27:08.106975] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:45.998 [2024-11-28 07:27:08.115123] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:45.998 null0 00:17:45.998 [2024-11-28 07:27:08.147025] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.998 07:27:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.998 07:27:08 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83679 00:17:45.998 07:27:08 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83679 /tmp/host.sock 00:17:45.998 07:27:08 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:45.998 07:27:08 -- common/autotest_common.sh@829 -- # '[' -z 83679 ']' 00:17:45.998 07:27:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:45.998 07:27:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.998 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:45.998 07:27:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:45.998 07:27:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.998 07:27:08 -- common/autotest_common.sh@10 -- # set +x 00:17:45.998 [2024-11-28 07:27:08.226524] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:45.998 [2024-11-28 07:27:08.226638] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83679 ] 00:17:46.257 [2024-11-28 07:27:08.369196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.257 [2024-11-28 07:27:08.478633] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:46.257 [2024-11-28 07:27:08.478843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.221 07:27:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.221 07:27:09 -- common/autotest_common.sh@862 -- # return 0 00:17:47.221 07:27:09 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.221 07:27:09 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:47.221 07:27:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.221 07:27:09 -- common/autotest_common.sh@10 -- # set +x 00:17:47.221 07:27:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.221 07:27:09 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:47.221 07:27:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.221 07:27:09 -- common/autotest_common.sh@10 -- # set +x 00:17:47.221 07:27:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.221 07:27:09 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:47.221 07:27:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.221 07:27:09 -- common/autotest_common.sh@10 -- # set +x 00:17:48.158 [2024-11-28 07:27:10.380103] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:48.158 [2024-11-28 07:27:10.380169] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:48.158 [2024-11-28 07:27:10.380189] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:48.158 [2024-11-28 07:27:10.386151] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:48.430 [2024-11-28 07:27:10.442650] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:48.430 [2024-11-28 07:27:10.442746] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:48.430 [2024-11-28 07:27:10.442775] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:48.430 [2024-11-28 07:27:10.442793] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:48.430 [2024-11-28 07:27:10.442823] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:48.430 07:27:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:48.430 07:27:10 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.430 [2024-11-28 07:27:10.448808] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10192c0 was disconnected and freed. delete nvme_qpair. 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:48.430 07:27:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.430 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:48.430 07:27:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:48.430 07:27:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.430 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:48.430 07:27:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:48.430 07:27:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:49.394 07:27:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:49.394 07:27:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:49.394 07:27:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:49.394 07:27:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.394 07:27:11 -- common/autotest_common.sh@10 -- # set +x 00:17:49.394 07:27:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:49.394 07:27:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:49.394 07:27:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.394 07:27:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:49.394 07:27:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:50.773 07:27:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:50.773 07:27:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:50.773 07:27:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:50.773 07:27:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.773 07:27:12 -- common/autotest_common.sh@10 -- # set +x 00:17:50.773 07:27:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:50.773 07:27:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:50.773 07:27:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.773 07:27:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:50.773 07:27:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:51.709 07:27:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:51.709 07:27:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:17:51.709 07:27:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.709 07:27:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:51.709 07:27:13 -- common/autotest_common.sh@10 -- # set +x 00:17:51.709 07:27:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:51.709 07:27:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:51.709 07:27:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.709 07:27:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:51.709 07:27:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:52.645 07:27:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:52.646 07:27:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:52.646 07:27:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:52.646 07:27:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.646 07:27:14 -- common/autotest_common.sh@10 -- # set +x 00:17:52.646 07:27:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:52.646 07:27:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:52.646 07:27:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.646 07:27:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:52.646 07:27:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:53.582 07:27:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:53.582 07:27:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:53.582 07:27:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:53.582 07:27:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.582 07:27:15 -- common/autotest_common.sh@10 -- # set +x 00:17:53.582 07:27:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:53.582 07:27:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:53.582 07:27:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.841 07:27:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:53.841 07:27:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:53.841 [2024-11-28 07:27:15.884901] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:53.841 [2024-11-28 07:27:15.884985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.841 [2024-11-28 07:27:15.885000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.841 [2024-11-28 07:27:15.885014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.841 [2024-11-28 07:27:15.885023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.841 [2024-11-28 07:27:15.885032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.841 [2024-11-28 07:27:15.885041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.841 [2024-11-28 07:27:15.885054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.841 [2024-11-28 07:27:15.885062] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.841 [2024-11-28 07:27:15.885072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:53.841 [2024-11-28 07:27:15.885080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:53.841 [2024-11-28 07:27:15.885089] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdd6c0 is same with the state(5) to be set 00:17:53.841 [2024-11-28 07:27:15.894895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdd6c0 (9): Bad file descriptor 00:17:53.841 [2024-11-28 07:27:15.904917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:54.778 07:27:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:54.778 07:27:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:54.778 07:27:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.778 07:27:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:54.778 07:27:16 -- common/autotest_common.sh@10 -- # set +x 00:17:54.778 07:27:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:54.778 07:27:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:54.778 [2024-11-28 07:27:16.929411] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:55.715 [2024-11-28 07:27:17.953441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:57.093 [2024-11-28 07:27:18.977444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:57.093 [2024-11-28 07:27:18.977585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfdd6c0 with addr=10.0.0.2, port=4420 00:17:57.093 [2024-11-28 07:27:18.977619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdd6c0 is same with the state(5) to be set 00:17:57.093 [2024-11-28 07:27:18.977676] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:57.093 [2024-11-28 07:27:18.977696] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:57.093 [2024-11-28 07:27:18.977711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:57.093 [2024-11-28 07:27:18.977729] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:57.093 [2024-11-28 07:27:18.978540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfdd6c0 (9): Bad file descriptor 00:17:57.093 [2024-11-28 07:27:18.978613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:57.093 [2024-11-28 07:27:18.978667] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:57.093 [2024-11-28 07:27:18.978735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.093 [2024-11-28 07:27:18.978764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.093 [2024-11-28 07:27:18.978790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.093 [2024-11-28 07:27:18.978820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.093 [2024-11-28 07:27:18.978854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.093 [2024-11-28 07:27:18.978873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.093 [2024-11-28 07:27:18.978894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.093 [2024-11-28 07:27:18.978913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.093 [2024-11-28 07:27:18.978934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.093 [2024-11-28 07:27:18.978954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.093 [2024-11-28 07:27:18.978973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
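Throughout this stretch the script keeps polling the bdev list once per second until the nvme0n1 namespace disappears. A minimal stand-alone sketch of that polling pattern, assuming an SPDK checkout in $SPDK_DIR and the host-side RPC socket at /tmp/host.sock as in the log; the helper bodies below are paraphrased from the xtrace, not copied from discovery_remove_ifc.sh:

get_bdev_list() {
    # Ask the host-side SPDK app for its bdevs and flatten the names into one sorted line.
    "$SPDK_DIR/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected value
    # (an empty string means "wait until the namespace bdev is gone").
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

In this run the loop first waits for nvme0n1 to vanish after the interface teardown, and later (as wait_for_bdev nvme1n1) for the rediscovered namespace to reappear.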
00:17:57.093 [2024-11-28 07:27:18.979032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfddad0 (9): Bad file descriptor 00:17:57.093 [2024-11-28 07:27:18.980032] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:57.093 [2024-11-28 07:27:18.980100] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:57.093 07:27:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.093 07:27:18 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:57.093 07:27:18 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:58.030 07:27:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:58.030 07:27:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:58.030 07:27:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:58.030 07:27:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.030 07:27:20 -- common/autotest_common.sh@10 -- # set +x 00:17:58.030 07:27:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:58.030 07:27:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:58.031 07:27:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:58.031 07:27:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.031 07:27:20 -- common/autotest_common.sh@10 -- # set +x 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:58.031 07:27:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:58.031 07:27:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:58.968 [2024-11-28 07:27:20.984763] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:58.968 [2024-11-28 07:27:20.984802] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:58.968 [2024-11-28 07:27:20.984820] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:58.968 [2024-11-28 07:27:20.990794] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:58.968 [2024-11-28 07:27:21.046250] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:58.968 [2024-11-28 07:27:21.046301] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:58.968 [2024-11-28 07:27:21.046336] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:58.968 [2024-11-28 07:27:21.046354] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:17:58.968 [2024-11-28 07:27:21.046363] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:58.968 [2024-11-28 07:27:21.053540] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfea930 was disconnected and freed. delete nvme_qpair. 00:17:58.968 07:27:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:58.968 07:27:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:58.968 07:27:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.968 07:27:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:58.968 07:27:21 -- common/autotest_common.sh@10 -- # set +x 00:17:58.968 07:27:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:58.968 07:27:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:58.968 07:27:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.968 07:27:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:58.968 07:27:21 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:58.968 07:27:21 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83679 00:17:58.968 07:27:21 -- common/autotest_common.sh@936 -- # '[' -z 83679 ']' 00:17:58.968 07:27:21 -- common/autotest_common.sh@940 -- # kill -0 83679 00:17:58.968 07:27:21 -- common/autotest_common.sh@941 -- # uname 00:17:58.968 07:27:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:58.968 07:27:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83679 00:17:58.968 killing process with pid 83679 00:17:58.968 07:27:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:58.968 07:27:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:58.968 07:27:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83679' 00:17:58.968 07:27:21 -- common/autotest_common.sh@955 -- # kill 83679 00:17:58.968 07:27:21 -- common/autotest_common.sh@960 -- # wait 83679 00:17:59.238 07:27:21 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:59.238 07:27:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:59.238 07:27:21 -- nvmf/common.sh@116 -- # sync 00:17:59.500 07:27:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:59.500 07:27:21 -- nvmf/common.sh@119 -- # set +e 00:17:59.500 07:27:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:59.500 07:27:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:59.500 rmmod nvme_tcp 00:17:59.500 rmmod nvme_fabrics 00:17:59.500 rmmod nvme_keyring 00:17:59.500 07:27:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:59.500 07:27:21 -- nvmf/common.sh@123 -- # set -e 00:17:59.500 07:27:21 -- nvmf/common.sh@124 -- # return 0 00:17:59.500 07:27:21 -- nvmf/common.sh@477 -- # '[' -n 83647 ']' 00:17:59.500 07:27:21 -- nvmf/common.sh@478 -- # killprocess 83647 00:17:59.500 07:27:21 -- common/autotest_common.sh@936 -- # '[' -z 83647 ']' 00:17:59.500 07:27:21 -- common/autotest_common.sh@940 -- # kill -0 83647 00:17:59.500 07:27:21 -- common/autotest_common.sh@941 -- # uname 00:17:59.500 07:27:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.500 07:27:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83647 00:17:59.500 killing process with pid 83647 00:17:59.500 07:27:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:59.500 07:27:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
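The killprocess calls above follow a fixed pattern before sending the signal: confirm the platform, confirm the pid still names an SPDK reactor, announce the kill, then wait on the pid so its exit status is reaped. A rough, self-contained approximation of that guard (the real helper lives in autotest_common.sh and also handles the sudo case):

killprocess() {
    local pid=$1
    # Nothing to do if the process already exited.
    kill -0 "$pid" 2>/dev/null || return 0
    # Only kill pids that still look like an SPDK reactor; a recycled pid is left alone.
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    if [[ $process_name == reactor_* ]]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    fi
}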
00:17:59.500 07:27:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83647' 00:17:59.500 07:27:21 -- common/autotest_common.sh@955 -- # kill 83647 00:17:59.500 07:27:21 -- common/autotest_common.sh@960 -- # wait 83647 00:17:59.760 07:27:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.760 07:27:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.760 07:27:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.760 07:27:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.760 07:27:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.760 07:27:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.760 07:27:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.760 07:27:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.760 07:27:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:59.760 00:17:59.760 real 0m15.586s 00:17:59.760 user 0m24.862s 00:17:59.760 sys 0m2.662s 00:17:59.760 07:27:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:59.760 07:27:21 -- common/autotest_common.sh@10 -- # set +x 00:17:59.760 ************************************ 00:17:59.760 END TEST nvmf_discovery_remove_ifc 00:17:59.760 ************************************ 00:17:59.760 07:27:21 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:59.760 07:27:21 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:59.760 07:27:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:59.760 07:27:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:59.760 07:27:21 -- common/autotest_common.sh@10 -- # set +x 00:17:59.760 ************************************ 00:17:59.760 START TEST nvmf_digest 00:17:59.760 ************************************ 00:17:59.760 07:27:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:00.020 * Looking for test storage... 00:18:00.020 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:00.020 07:27:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:00.020 07:27:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:00.020 07:27:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:00.020 07:27:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:00.020 07:27:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:00.020 07:27:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:00.020 07:27:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:00.020 07:27:22 -- scripts/common.sh@335 -- # IFS=.-: 00:18:00.020 07:27:22 -- scripts/common.sh@335 -- # read -ra ver1 00:18:00.020 07:27:22 -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.020 07:27:22 -- scripts/common.sh@336 -- # read -ra ver2 00:18:00.020 07:27:22 -- scripts/common.sh@337 -- # local 'op=<' 00:18:00.020 07:27:22 -- scripts/common.sh@339 -- # ver1_l=2 00:18:00.020 07:27:22 -- scripts/common.sh@340 -- # ver2_l=1 00:18:00.020 07:27:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:00.020 07:27:22 -- scripts/common.sh@343 -- # case "$op" in 00:18:00.020 07:27:22 -- scripts/common.sh@344 -- # : 1 00:18:00.020 07:27:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:00.020 07:27:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:00.020 07:27:22 -- scripts/common.sh@364 -- # decimal 1 00:18:00.020 07:27:22 -- scripts/common.sh@352 -- # local d=1 00:18:00.020 07:27:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.020 07:27:22 -- scripts/common.sh@354 -- # echo 1 00:18:00.020 07:27:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:00.020 07:27:22 -- scripts/common.sh@365 -- # decimal 2 00:18:00.020 07:27:22 -- scripts/common.sh@352 -- # local d=2 00:18:00.020 07:27:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.020 07:27:22 -- scripts/common.sh@354 -- # echo 2 00:18:00.020 07:27:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:00.020 07:27:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:00.020 07:27:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:00.020 07:27:22 -- scripts/common.sh@367 -- # return 0 00:18:00.020 07:27:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.020 07:27:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:00.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.020 --rc genhtml_branch_coverage=1 00:18:00.020 --rc genhtml_function_coverage=1 00:18:00.020 --rc genhtml_legend=1 00:18:00.020 --rc geninfo_all_blocks=1 00:18:00.020 --rc geninfo_unexecuted_blocks=1 00:18:00.020 00:18:00.020 ' 00:18:00.020 07:27:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:00.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.020 --rc genhtml_branch_coverage=1 00:18:00.020 --rc genhtml_function_coverage=1 00:18:00.020 --rc genhtml_legend=1 00:18:00.020 --rc geninfo_all_blocks=1 00:18:00.020 --rc geninfo_unexecuted_blocks=1 00:18:00.020 00:18:00.020 ' 00:18:00.020 07:27:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:00.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.020 --rc genhtml_branch_coverage=1 00:18:00.020 --rc genhtml_function_coverage=1 00:18:00.020 --rc genhtml_legend=1 00:18:00.020 --rc geninfo_all_blocks=1 00:18:00.020 --rc geninfo_unexecuted_blocks=1 00:18:00.020 00:18:00.020 ' 00:18:00.020 07:27:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:00.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.020 --rc genhtml_branch_coverage=1 00:18:00.020 --rc genhtml_function_coverage=1 00:18:00.020 --rc genhtml_legend=1 00:18:00.020 --rc geninfo_all_blocks=1 00:18:00.020 --rc geninfo_unexecuted_blocks=1 00:18:00.020 00:18:00.020 ' 00:18:00.020 07:27:22 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:00.020 07:27:22 -- nvmf/common.sh@7 -- # uname -s 00:18:00.020 07:27:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.020 07:27:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.020 07:27:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.020 07:27:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.020 07:27:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.020 07:27:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.020 07:27:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.020 07:27:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.020 07:27:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.020 07:27:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.020 07:27:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:18:00.020 
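The scripts/common.sh trace at the top of this digest suite (cmp_versions / lt 1.15 2) is the stock idiom for deciding whether the installed lcov predates 2.x and therefore needs the legacy branch/function coverage flags. A compact sketch of the same element-wise comparison, assuming purely numeric dot-separated components (the real helper also splits on '-' and ':'):

version_lt() {
    # Succeeds (returns 0) when version $1 is strictly lower than version $2.
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # versions are equal
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_lcov_flags=1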
07:27:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:18:00.020 07:27:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.020 07:27:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.020 07:27:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:00.020 07:27:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:00.020 07:27:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.020 07:27:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.020 07:27:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.020 07:27:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.020 07:27:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.020 07:27:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.020 07:27:22 -- paths/export.sh@5 -- # export PATH 00:18:00.020 07:27:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.020 07:27:22 -- nvmf/common.sh@46 -- # : 0 00:18:00.020 07:27:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:00.020 07:27:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:00.020 07:27:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:00.020 07:27:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.020 07:27:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.020 07:27:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:18:00.020 07:27:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:00.020 07:27:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:00.020 07:27:22 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:00.020 07:27:22 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:00.020 07:27:22 -- host/digest.sh@16 -- # runtime=2 00:18:00.020 07:27:22 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:18:00.020 07:27:22 -- host/digest.sh@132 -- # nvmftestinit 00:18:00.020 07:27:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:00.020 07:27:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.020 07:27:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:00.020 07:27:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:00.020 07:27:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:00.020 07:27:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.020 07:27:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.020 07:27:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.020 07:27:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:00.020 07:27:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:00.020 07:27:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:00.020 07:27:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:00.020 07:27:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:00.020 07:27:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:00.020 07:27:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.020 07:27:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.020 07:27:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:00.020 07:27:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:00.020 07:27:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:00.020 07:27:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:00.020 07:27:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:00.020 07:27:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.020 07:27:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:00.020 07:27:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:00.021 07:27:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:00.021 07:27:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:00.021 07:27:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:00.021 07:27:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:00.021 Cannot find device "nvmf_tgt_br" 00:18:00.021 07:27:22 -- nvmf/common.sh@154 -- # true 00:18:00.021 07:27:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:00.021 Cannot find device "nvmf_tgt_br2" 00:18:00.021 07:27:22 -- nvmf/common.sh@155 -- # true 00:18:00.021 07:27:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:00.021 07:27:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:00.021 Cannot find device "nvmf_tgt_br" 00:18:00.021 07:27:22 -- nvmf/common.sh@157 -- # true 00:18:00.021 07:27:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:00.021 Cannot find device "nvmf_tgt_br2" 00:18:00.021 07:27:22 -- nvmf/common.sh@158 -- # true 00:18:00.021 07:27:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:00.280 07:27:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:00.280 
07:27:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:00.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.280 07:27:22 -- nvmf/common.sh@161 -- # true 00:18:00.280 07:27:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:00.280 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:00.280 07:27:22 -- nvmf/common.sh@162 -- # true 00:18:00.280 07:27:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:00.280 07:27:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:00.280 07:27:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:00.280 07:27:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:00.280 07:27:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:00.281 07:27:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:00.281 07:27:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:00.281 07:27:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:00.281 07:27:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:00.281 07:27:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:00.281 07:27:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:00.281 07:27:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:00.281 07:27:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:00.281 07:27:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:00.281 07:27:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:00.281 07:27:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:00.281 07:27:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:00.281 07:27:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:00.281 07:27:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:00.281 07:27:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:00.281 07:27:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:00.281 07:27:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:00.281 07:27:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:00.281 07:27:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:00.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:00.281 00:18:00.281 --- 10.0.0.2 ping statistics --- 00:18:00.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.281 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:00.281 07:27:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:00.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:00.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:18:00.281 00:18:00.281 --- 10.0.0.3 ping statistics --- 00:18:00.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.281 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:00.281 07:27:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:00.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:00.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:00.281 00:18:00.281 --- 10.0.0.1 ping statistics --- 00:18:00.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.281 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:00.281 07:27:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.281 07:27:22 -- nvmf/common.sh@421 -- # return 0 00:18:00.281 07:27:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:00.281 07:27:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.281 07:27:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:00.281 07:27:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:00.281 07:27:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.281 07:27:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:00.281 07:27:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:00.540 07:27:22 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:00.540 07:27:22 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:18:00.540 07:27:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:00.540 07:27:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:00.540 07:27:22 -- common/autotest_common.sh@10 -- # set +x 00:18:00.540 ************************************ 00:18:00.540 START TEST nvmf_digest_clean 00:18:00.540 ************************************ 00:18:00.540 07:27:22 -- common/autotest_common.sh@1114 -- # run_digest 00:18:00.540 07:27:22 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:18:00.540 07:27:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:00.540 07:27:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:00.540 07:27:22 -- common/autotest_common.sh@10 -- # set +x 00:18:00.540 07:27:22 -- nvmf/common.sh@469 -- # nvmfpid=84105 00:18:00.540 07:27:22 -- nvmf/common.sh@470 -- # waitforlisten 84105 00:18:00.540 07:27:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:00.540 07:27:22 -- common/autotest_common.sh@829 -- # '[' -z 84105 ']' 00:18:00.540 07:27:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.540 07:27:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.540 07:27:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.540 07:27:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.540 07:27:22 -- common/autotest_common.sh@10 -- # set +x 00:18:00.540 [2024-11-28 07:27:22.638952] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
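The nvmf_veth_init sequence traced above builds the whole test topology from scratch: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, an iptables accept rule for the NVMe/TCP port, and ping checks proving the 10.0.0.x paths work before any NVMe traffic is attempted. A condensed sketch of that wiring with the same device and namespace names; the second target interface, removal of stale devices, and error handling are omitted:

NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                        # initiator side -> target address must answer
ip netns exec "$NS" ping -c 1 10.0.0.1    # and the reverse direction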
00:18:00.540 [2024-11-28 07:27:22.639060] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.540 [2024-11-28 07:27:22.781814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.818 [2024-11-28 07:27:22.904248] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:00.818 [2024-11-28 07:27:22.904444] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.818 [2024-11-28 07:27:22.904461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.818 [2024-11-28 07:27:22.904472] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.818 [2024-11-28 07:27:22.904514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.412 07:27:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.412 07:27:23 -- common/autotest_common.sh@862 -- # return 0 00:18:01.412 07:27:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:01.412 07:27:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:01.412 07:27:23 -- common/autotest_common.sh@10 -- # set +x 00:18:01.412 07:27:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.412 07:27:23 -- host/digest.sh@120 -- # common_target_config 00:18:01.412 07:27:23 -- host/digest.sh@43 -- # rpc_cmd 00:18:01.412 07:27:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.412 07:27:23 -- common/autotest_common.sh@10 -- # set +x 00:18:01.671 null0 00:18:01.671 [2024-11-28 07:27:23.767590] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.671 [2024-11-28 07:27:23.791718] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.671 07:27:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.671 07:27:23 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:18:01.671 07:27:23 -- host/digest.sh@77 -- # local rw bs qd 00:18:01.671 07:27:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:01.671 07:27:23 -- host/digest.sh@80 -- # rw=randread 00:18:01.671 07:27:23 -- host/digest.sh@80 -- # bs=4096 00:18:01.671 07:27:23 -- host/digest.sh@80 -- # qd=128 00:18:01.671 07:27:23 -- host/digest.sh@82 -- # bperfpid=84137 00:18:01.671 07:27:23 -- host/digest.sh@83 -- # waitforlisten 84137 /var/tmp/bperf.sock 00:18:01.671 07:27:23 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:01.671 07:27:23 -- common/autotest_common.sh@829 -- # '[' -z 84137 ']' 00:18:01.671 07:27:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:01.671 07:27:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:01.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:01.671 07:27:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:18:01.671 07:27:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:01.671 07:27:23 -- common/autotest_common.sh@10 -- # set +x 00:18:01.671 [2024-11-28 07:27:23.842392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:01.671 [2024-11-28 07:27:23.842491] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84137 ] 00:18:01.950 [2024-11-28 07:27:23.976959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.950 [2024-11-28 07:27:24.070755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.885 07:27:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.885 07:27:24 -- common/autotest_common.sh@862 -- # return 0 00:18:02.885 07:27:24 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:18:02.885 07:27:24 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:18:02.885 07:27:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:03.143 07:27:25 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.143 07:27:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:03.400 nvme0n1 00:18:03.400 07:27:25 -- host/digest.sh@91 -- # bperf_py perform_tests 00:18:03.400 07:27:25 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.400 Running I/O for 2 seconds... 
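Each digest benchmark in this suite repeats the same four-step choreography: launch bdevperf idle and paused (-z --wait-for-rpc) on its own RPC socket, complete framework initialization over that socket, attach an NVMe-oF controller with TCP data digest enabled (--ddgst), then drive the configured workload through bdevperf.py. A sketch of one run, assuming an SPDK checkout in $SPDK_DIR and a target already listening on 10.0.0.2:4420; waiting for the RPC socket to appear is omitted:

SOCK=/var/tmp/bperf.sock

# 1. Start bdevperf idle (-z) and paused until framework_start_init arrives.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!

# 2. Finish subsystem initialization once the socket is reachable.
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" framework_start_init

# 3. Attach the target with data digest (crc32c over the TCP data PDUs) enabled.
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Run the workload that was declared on the command line in step 1.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

kill "$bperfpid"; wait "$bperfpid"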
00:18:05.937 00:18:05.937 Latency(us) 00:18:05.937 [2024-11-28T07:27:28.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.937 [2024-11-28T07:27:28.212Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:05.937 nvme0n1 : 2.01 17891.99 69.89 0.00 0.00 7149.19 6464.23 17515.99 00:18:05.937 [2024-11-28T07:27:28.212Z] =================================================================================================================== 00:18:05.937 [2024-11-28T07:27:28.212Z] Total : 17891.99 69.89 0.00 0.00 7149.19 6464.23 17515.99 00:18:05.937 0 00:18:05.937 07:27:27 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:18:05.937 07:27:27 -- host/digest.sh@92 -- # get_accel_stats 00:18:05.937 07:27:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:05.937 07:27:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:05.937 07:27:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:05.937 | select(.opcode=="crc32c") 00:18:05.937 | "\(.module_name) \(.executed)"' 00:18:05.937 07:27:27 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:18:05.937 07:27:27 -- host/digest.sh@93 -- # exp_module=software 00:18:05.937 07:27:27 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:18:05.937 07:27:27 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:05.937 07:27:27 -- host/digest.sh@97 -- # killprocess 84137 00:18:05.937 07:27:27 -- common/autotest_common.sh@936 -- # '[' -z 84137 ']' 00:18:05.937 07:27:27 -- common/autotest_common.sh@940 -- # kill -0 84137 00:18:05.937 07:27:27 -- common/autotest_common.sh@941 -- # uname 00:18:05.937 07:27:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.937 07:27:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84137 00:18:05.937 07:27:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:05.937 07:27:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:05.937 killing process with pid 84137 00:18:05.937 07:27:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84137' 00:18:05.937 Received shutdown signal, test time was about 2.000000 seconds 00:18:05.937 00:18:05.937 Latency(us) 00:18:05.937 [2024-11-28T07:27:28.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.937 [2024-11-28T07:27:28.212Z] =================================================================================================================== 00:18:05.937 [2024-11-28T07:27:28.212Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:05.937 07:27:28 -- common/autotest_common.sh@955 -- # kill 84137 00:18:05.937 07:27:28 -- common/autotest_common.sh@960 -- # wait 84137 00:18:06.196 07:27:28 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:18:06.196 07:27:28 -- host/digest.sh@77 -- # local rw bs qd 00:18:06.196 07:27:28 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:06.196 07:27:28 -- host/digest.sh@80 -- # rw=randread 00:18:06.196 07:27:28 -- host/digest.sh@80 -- # bs=131072 00:18:06.196 07:27:28 -- host/digest.sh@80 -- # qd=16 00:18:06.196 07:27:28 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:06.196 07:27:28 -- host/digest.sh@82 -- # bperfpid=84197 00:18:06.196 07:27:28 -- host/digest.sh@83 -- # waitforlisten 84197 /var/tmp/bperf.sock 00:18:06.196 07:27:28 -- 
common/autotest_common.sh@829 -- # '[' -z 84197 ']' 00:18:06.196 07:27:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.196 07:27:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.196 07:27:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:06.196 07:27:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.196 07:27:28 -- common/autotest_common.sh@10 -- # set +x 00:18:06.196 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:06.196 Zero copy mechanism will not be used. 00:18:06.196 [2024-11-28 07:27:28.297537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:06.196 [2024-11-28 07:27:28.297624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84197 ] 00:18:06.196 [2024-11-28 07:27:28.427366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.455 [2024-11-28 07:27:28.502879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.022 07:27:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.022 07:27:29 -- common/autotest_common.sh@862 -- # return 0 00:18:07.022 07:27:29 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:18:07.022 07:27:29 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:18:07.022 07:27:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:07.280 07:27:29 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.281 07:27:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.539 nvme0n1 00:18:07.798 07:27:29 -- host/digest.sh@91 -- # bperf_py perform_tests 00:18:07.798 07:27:29 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:07.798 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:07.798 Zero copy mechanism will not be used. 00:18:07.798 Running I/O for 2 seconds... 
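After every run the script verifies that the digest work actually went through the accel framework: it pulls accel statistics from the bdevperf instance, filters for the crc32c opcode, and asserts both that the opcode executed at least once and that it ran on the expected module (plain software here, since no hardware accel driver is configured). A sketch of that check, with the socket path and jq filter taken from the log:

read -r acc_module acc_executed < <(
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# The digest must have been computed, and by the module we expected.
(( acc_executed > 0 ))        || { echo "crc32c never executed"; exit 1; }
[[ $acc_module == software ]] || { echo "unexpected accel module: $acc_module"; exit 1; }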
00:18:09.704 00:18:09.704 Latency(us) 00:18:09.704 [2024-11-28T07:27:31.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.704 [2024-11-28T07:27:31.979Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:09.704 nvme0n1 : 2.00 8592.96 1074.12 0.00 0.00 1859.31 1683.08 5987.61 00:18:09.704 [2024-11-28T07:27:31.979Z] =================================================================================================================== 00:18:09.704 [2024-11-28T07:27:31.979Z] Total : 8592.96 1074.12 0.00 0.00 1859.31 1683.08 5987.61 00:18:09.704 0 00:18:09.704 07:27:31 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:18:09.704 07:27:31 -- host/digest.sh@92 -- # get_accel_stats 00:18:09.704 07:27:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:09.704 07:27:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:09.704 07:27:31 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:09.704 | select(.opcode=="crc32c") 00:18:09.704 | "\(.module_name) \(.executed)"' 00:18:09.963 07:27:32 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:18:09.963 07:27:32 -- host/digest.sh@93 -- # exp_module=software 00:18:09.963 07:27:32 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:18:09.963 07:27:32 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:09.963 07:27:32 -- host/digest.sh@97 -- # killprocess 84197 00:18:09.963 07:27:32 -- common/autotest_common.sh@936 -- # '[' -z 84197 ']' 00:18:09.963 07:27:32 -- common/autotest_common.sh@940 -- # kill -0 84197 00:18:10.223 07:27:32 -- common/autotest_common.sh@941 -- # uname 00:18:10.223 07:27:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.223 07:27:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84197 00:18:10.223 07:27:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:10.223 07:27:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:10.223 killing process with pid 84197 00:18:10.223 07:27:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84197' 00:18:10.223 Received shutdown signal, test time was about 2.000000 seconds 00:18:10.223 00:18:10.223 Latency(us) 00:18:10.223 [2024-11-28T07:27:32.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.223 [2024-11-28T07:27:32.498Z] =================================================================================================================== 00:18:10.223 [2024-11-28T07:27:32.498Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.223 07:27:32 -- common/autotest_common.sh@955 -- # kill 84197 00:18:10.223 07:27:32 -- common/autotest_common.sh@960 -- # wait 84197 00:18:10.483 07:27:32 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:18:10.483 07:27:32 -- host/digest.sh@77 -- # local rw bs qd 00:18:10.483 07:27:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:10.483 07:27:32 -- host/digest.sh@80 -- # rw=randwrite 00:18:10.483 07:27:32 -- host/digest.sh@80 -- # bs=4096 00:18:10.483 07:27:32 -- host/digest.sh@80 -- # qd=128 00:18:10.483 07:27:32 -- host/digest.sh@82 -- # bperfpid=84258 00:18:10.483 07:27:32 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:10.483 07:27:32 -- host/digest.sh@83 -- # waitforlisten 84258 /var/tmp/bperf.sock 00:18:10.483 07:27:32 -- 
common/autotest_common.sh@829 -- # '[' -z 84258 ']' 00:18:10.483 07:27:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:10.483 07:27:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:10.483 07:27:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:10.483 07:27:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.483 07:27:32 -- common/autotest_common.sh@10 -- # set +x 00:18:10.483 [2024-11-28 07:27:32.609673] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:10.483 [2024-11-28 07:27:32.609776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84258 ] 00:18:10.483 [2024-11-28 07:27:32.740303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.742 [2024-11-28 07:27:32.851059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.679 07:27:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.679 07:27:33 -- common/autotest_common.sh@862 -- # return 0 00:18:11.679 07:27:33 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:18:11.679 07:27:33 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:18:11.679 07:27:33 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:11.938 07:27:34 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:11.938 07:27:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:12.196 nvme0n1 00:18:12.196 07:27:34 -- host/digest.sh@91 -- # bperf_py perform_tests 00:18:12.196 07:27:34 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:12.196 Running I/O for 2 seconds... 
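The MiB/s column in these result tables is simply IOPS multiplied by the I/O size: throughput = IOPS × block_size / 2^20. For the 4096-byte randread run above that is 17891.99 × 4096 / 1048576 ≈ 69.89 MiB/s, and for the 131072-byte run 8592.96 × 131072 / 1048576 ≈ 1074.12 MiB/s, both matching the reported figures.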
00:18:14.785 00:18:14.785 Latency(us) 00:18:14.785 [2024-11-28T07:27:37.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.785 [2024-11-28T07:27:37.060Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:14.785 nvme0n1 : 2.01 18612.83 72.71 0.00 0.00 6870.61 6166.34 14834.97 00:18:14.785 [2024-11-28T07:27:37.060Z] =================================================================================================================== 00:18:14.785 [2024-11-28T07:27:37.060Z] Total : 18612.83 72.71 0.00 0.00 6870.61 6166.34 14834.97 00:18:14.785 0 00:18:14.785 07:27:36 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:18:14.785 07:27:36 -- host/digest.sh@92 -- # get_accel_stats 00:18:14.785 07:27:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:14.785 07:27:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:14.785 | select(.opcode=="crc32c") 00:18:14.785 | "\(.module_name) \(.executed)"' 00:18:14.785 07:27:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:14.785 07:27:36 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:18:14.785 07:27:36 -- host/digest.sh@93 -- # exp_module=software 00:18:14.785 07:27:36 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:18:14.785 07:27:36 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:14.785 07:27:36 -- host/digest.sh@97 -- # killprocess 84258 00:18:14.785 07:27:36 -- common/autotest_common.sh@936 -- # '[' -z 84258 ']' 00:18:14.785 07:27:36 -- common/autotest_common.sh@940 -- # kill -0 84258 00:18:14.785 07:27:36 -- common/autotest_common.sh@941 -- # uname 00:18:14.785 07:27:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.785 07:27:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84258 00:18:14.785 07:27:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:14.785 07:27:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:14.785 07:27:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84258' 00:18:14.785 killing process with pid 84258 00:18:14.785 07:27:36 -- common/autotest_common.sh@955 -- # kill 84258 00:18:14.785 Received shutdown signal, test time was about 2.000000 seconds 00:18:14.785 00:18:14.785 Latency(us) 00:18:14.785 [2024-11-28T07:27:37.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.785 [2024-11-28T07:27:37.060Z] =================================================================================================================== 00:18:14.785 [2024-11-28T07:27:37.060Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.785 07:27:36 -- common/autotest_common.sh@960 -- # wait 84258 00:18:14.785 07:27:37 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:18:14.785 07:27:37 -- host/digest.sh@77 -- # local rw bs qd 00:18:14.785 07:27:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:14.785 07:27:37 -- host/digest.sh@80 -- # rw=randwrite 00:18:14.785 07:27:37 -- host/digest.sh@80 -- # bs=131072 00:18:14.785 07:27:37 -- host/digest.sh@80 -- # qd=16 00:18:14.785 07:27:37 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:14.785 07:27:37 -- host/digest.sh@82 -- # bperfpid=84325 00:18:14.785 07:27:37 -- host/digest.sh@83 -- # waitforlisten 84325 /var/tmp/bperf.sock 00:18:14.785 07:27:37 -- 
common/autotest_common.sh@829 -- # '[' -z 84325 ']' 00:18:14.785 07:27:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:14.785 07:27:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:14.785 07:27:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:14.785 07:27:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.785 07:27:37 -- common/autotest_common.sh@10 -- # set +x 00:18:15.045 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:15.045 Zero copy mechanism will not be used. 00:18:15.045 [2024-11-28 07:27:37.080819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:15.045 [2024-11-28 07:27:37.080909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84325 ] 00:18:15.045 [2024-11-28 07:27:37.212027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.304 [2024-11-28 07:27:37.334245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.871 07:27:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.871 07:27:38 -- common/autotest_common.sh@862 -- # return 0 00:18:15.871 07:27:38 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:18:15.871 07:27:38 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:18:15.872 07:27:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:16.440 07:27:38 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:16.440 07:27:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:16.440 nvme0n1 00:18:16.700 07:27:38 -- host/digest.sh@91 -- # bperf_py perform_tests 00:18:16.700 07:27:38 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:16.700 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:16.700 Zero copy mechanism will not be used. 00:18:16.700 Running I/O for 2 seconds... 
00:18:18.605 00:18:18.605 Latency(us) 00:18:18.605 [2024-11-28T07:27:40.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.605 [2024-11-28T07:27:40.880Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:18.605 nvme0n1 : 2.00 7486.15 935.77 0.00 0.00 2132.69 1638.40 4170.47 00:18:18.605 [2024-11-28T07:27:40.880Z] =================================================================================================================== 00:18:18.605 [2024-11-28T07:27:40.880Z] Total : 7486.15 935.77 0.00 0.00 2132.69 1638.40 4170.47 00:18:18.605 0 00:18:18.605 07:27:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:18:18.605 07:27:40 -- host/digest.sh@92 -- # get_accel_stats 00:18:18.605 07:27:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:18.605 07:27:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:18.605 | select(.opcode=="crc32c") 00:18:18.605 | "\(.module_name) \(.executed)"' 00:18:18.605 07:27:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:18.864 07:27:41 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:18:18.864 07:27:41 -- host/digest.sh@93 -- # exp_module=software 00:18:18.864 07:27:41 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:18:18.864 07:27:41 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:18.864 07:27:41 -- host/digest.sh@97 -- # killprocess 84325 00:18:18.864 07:27:41 -- common/autotest_common.sh@936 -- # '[' -z 84325 ']' 00:18:18.864 07:27:41 -- common/autotest_common.sh@940 -- # kill -0 84325 00:18:18.864 07:27:41 -- common/autotest_common.sh@941 -- # uname 00:18:18.864 07:27:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.864 07:27:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84325 00:18:19.123 killing process with pid 84325 00:18:19.123 Received shutdown signal, test time was about 2.000000 seconds 00:18:19.123 00:18:19.123 Latency(us) 00:18:19.123 [2024-11-28T07:27:41.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.123 [2024-11-28T07:27:41.398Z] =================================================================================================================== 00:18:19.123 [2024-11-28T07:27:41.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.123 07:27:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:19.123 07:27:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:19.123 07:27:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84325' 00:18:19.123 07:27:41 -- common/autotest_common.sh@955 -- # kill 84325 00:18:19.123 07:27:41 -- common/autotest_common.sh@960 -- # wait 84325 00:18:19.382 07:27:41 -- host/digest.sh@126 -- # killprocess 84105 00:18:19.382 07:27:41 -- common/autotest_common.sh@936 -- # '[' -z 84105 ']' 00:18:19.382 07:27:41 -- common/autotest_common.sh@940 -- # kill -0 84105 00:18:19.382 07:27:41 -- common/autotest_common.sh@941 -- # uname 00:18:19.382 07:27:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:19.382 07:27:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84105 00:18:19.382 killing process with pid 84105 00:18:19.382 07:27:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:19.382 07:27:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:19.382 07:27:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84105' 00:18:19.382 
07:27:41 -- common/autotest_common.sh@955 -- # kill 84105 00:18:19.382 07:27:41 -- common/autotest_common.sh@960 -- # wait 84105 00:18:19.641 ************************************ 00:18:19.641 END TEST nvmf_digest_clean 00:18:19.641 ************************************ 00:18:19.641 00:18:19.641 real 0m19.188s 00:18:19.641 user 0m36.894s 00:18:19.641 sys 0m4.952s 00:18:19.641 07:27:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:19.641 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:18:19.641 07:27:41 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:18:19.641 07:27:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:19.641 07:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.641 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:18:19.641 ************************************ 00:18:19.641 START TEST nvmf_digest_error 00:18:19.641 ************************************ 00:18:19.641 07:27:41 -- common/autotest_common.sh@1114 -- # run_digest_error 00:18:19.641 07:27:41 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:18:19.641 07:27:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:19.641 07:27:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.641 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:18:19.641 07:27:41 -- nvmf/common.sh@469 -- # nvmfpid=84408 00:18:19.641 07:27:41 -- nvmf/common.sh@470 -- # waitforlisten 84408 00:18:19.641 07:27:41 -- common/autotest_common.sh@829 -- # '[' -z 84408 ']' 00:18:19.641 07:27:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:19.641 07:27:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.641 07:27:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.641 07:27:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.641 07:27:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.641 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:18:19.641 [2024-11-28 07:27:41.879264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:19.641 [2024-11-28 07:27:41.879387] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.900 [2024-11-28 07:27:42.017564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.900 [2024-11-28 07:27:42.116414] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.900 [2024-11-28 07:27:42.116587] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.900 [2024-11-28 07:27:42.116600] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.900 [2024-11-28 07:27:42.116611] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
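Because the target is started with -e 0xFFFF, all tracepoint groups are enabled and the notices above spell out how to retrieve them. A minimal example of both options, using the instance id 0 from this run; the spdk_trace binary path is an assumption based on a default in-tree build:

# Decode a live snapshot of the nvmf tracepoints from shared-memory instance 0
"$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0

# Or stash the raw trace file for offline analysis after the target exits
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0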
00:18:19.900 [2024-11-28 07:27:42.116664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.837 07:27:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.837 07:27:42 -- common/autotest_common.sh@862 -- # return 0 00:18:20.837 07:27:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:20.837 07:27:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.837 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:18:20.837 07:27:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.837 07:27:42 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:20.837 07:27:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.837 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:18:20.837 [2024-11-28 07:27:42.893246] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:20.837 07:27:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.837 07:27:42 -- host/digest.sh@104 -- # common_target_config 00:18:20.837 07:27:42 -- host/digest.sh@43 -- # rpc_cmd 00:18:20.837 07:27:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.837 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:18:20.837 null0 00:18:20.837 [2024-11-28 07:27:43.030777] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.837 [2024-11-28 07:27:43.054929] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.837 07:27:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.837 07:27:43 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:18:20.837 07:27:43 -- host/digest.sh@54 -- # local rw bs qd 00:18:20.837 07:27:43 -- host/digest.sh@56 -- # rw=randread 00:18:20.837 07:27:43 -- host/digest.sh@56 -- # bs=4096 00:18:20.837 07:27:43 -- host/digest.sh@56 -- # qd=128 00:18:20.837 07:27:43 -- host/digest.sh@58 -- # bperfpid=84446 00:18:20.837 07:27:43 -- host/digest.sh@60 -- # waitforlisten 84446 /var/tmp/bperf.sock 00:18:20.837 07:27:43 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:20.837 07:27:43 -- common/autotest_common.sh@829 -- # '[' -z 84446 ']' 00:18:20.837 07:27:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:20.837 07:27:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:20.837 07:27:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:20.837 07:27:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.837 07:27:43 -- common/autotest_common.sh@10 -- # set +x 00:18:20.837 [2024-11-28 07:27:43.104245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:18:20.837 [2024-11-28 07:27:43.104349] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84446 ] 00:18:21.096 [2024-11-28 07:27:43.239229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.096 [2024-11-28 07:27:43.363130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.033 07:27:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.033 07:27:44 -- common/autotest_common.sh@862 -- # return 0 00:18:22.033 07:27:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:22.033 07:27:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:22.292 07:27:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:22.292 07:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.292 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:22.292 07:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.292 07:27:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:22.292 07:27:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:22.552 nvme0n1 00:18:22.552 07:27:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:22.552 07:27:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.552 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:18:22.552 07:27:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.552 07:27:44 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:22.552 07:27:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:22.552 Running I/O for 2 seconds... 
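For readability, the sequence that the host/digest.sh trace above just executed can be condensed into plain rpc.py calls. This is a minimal sketch, not the test script itself: the rpc and bperf_sock shell variables are conveniences added here, the target-side calls assume the default /var/tmp/spdk.sock that rpc_cmd talks to in this harness, and everything else (paths, the 10.0.0.2:4420 listener, nqn.2016-06.io.spdk:cnode1, the -i 256 argument) is copied from the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # same rpc.py path as in the trace
    bperf_sock=/var/tmp/bperf.sock                    # bdevperf's RPC socket

    # Target side (rpc_cmd in the trace): route crc32c through the error-injecting
    # accel module while nvmf_tgt is still in --wait-for-rpc, with injection disabled.
    $rpc accel_assign_opc -o crc32c -m error
    $rpc accel_error_inject_error -o crc32c -t disable

    # bdevperf side: count NVMe errors, retry indefinitely, and attach with the
    # TCP data digest enabled so a corrupted digest is actually detected.
    $rpc -s $bperf_sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc -s $bperf_sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm crc32c corruption with the same -i 256 argument as the trace, then run the
    # workload; each corrupted digest surfaces below as a "data digest error"
    # followed by a COMMAND TRANSIENT TRANSPORT ERROR completion.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf_sock perform_tests

With --bdev-retry-count -1 these digest failures are retried rather than failed outright, which is presumably why the 2-second run completes even though the completions logged below all carry the transient transport error status.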
00:18:22.812 [2024-11-28 07:27:44.837916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.812 [2024-11-28 07:27:44.837998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.838030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.853827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.853885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.853930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.870038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.870091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.870121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.885254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.885295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.885333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.900688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.900744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.900773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.917214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.917257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.917286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.932587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.932626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.932654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.948523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.948583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.948613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.965065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.965110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.965139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.980486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.980527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.980557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:44.997004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:44.997046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:44.997076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:45.013601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:45.013659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:45.013673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:45.030303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:45.030380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:45.030395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:45.046538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:45.046575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:45.046603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:45.061734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:45.061769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:45.061798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.813 [2024-11-28 07:27:45.077123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:22.813 [2024-11-28 07:27:45.077159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.813 [2024-11-28 07:27:45.077186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.093954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.093991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.094020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.111515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.111571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.111599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.128236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.128274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.128303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.143316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.143351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.143378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.158843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.158883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.158926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.175943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.175986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.176015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.192334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.192374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.192403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.208178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.208231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.208259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.225092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.225138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.225166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.240895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.240950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.240979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.256013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.256052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 [2024-11-28 07:27:45.256106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.072 [2024-11-28 07:27:45.271283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.072 [2024-11-28 07:27:45.271329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.072 
[2024-11-28 07:27:45.271358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.073 [2024-11-28 07:27:45.286431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.073 [2024-11-28 07:27:45.286465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.073 [2024-11-28 07:27:45.286493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.073 [2024-11-28 07:27:45.301612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.073 [2024-11-28 07:27:45.301649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.073 [2024-11-28 07:27:45.301677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.073 [2024-11-28 07:27:45.316787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.073 [2024-11-28 07:27:45.316825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:93 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.073 [2024-11-28 07:27:45.316853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.073 [2024-11-28 07:27:45.331795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.073 [2024-11-28 07:27:45.331835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.073 [2024-11-28 07:27:45.331863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.347677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.347734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.347747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.363284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.363333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.363361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.378449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.378488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5638 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.378516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.393656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.393698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.393726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.408872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.408915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.408943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.425153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.425199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.425228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.440276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.440330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.440361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.455328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.455366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.455394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.470340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.470379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.470407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.485420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.485459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:83 nsid:1 lba:23432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.332 [2024-11-28 07:27:45.485487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.332 [2024-11-28 07:27:45.500711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.332 [2024-11-28 07:27:45.500760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.333 [2024-11-28 07:27:45.500788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.333 [2024-11-28 07:27:45.515877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.333 [2024-11-28 07:27:45.515921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.333 [2024-11-28 07:27:45.515949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.333 [2024-11-28 07:27:45.531145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.333 [2024-11-28 07:27:45.531182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.333 [2024-11-28 07:27:45.531209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.333 [2024-11-28 07:27:45.546271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.333 [2024-11-28 07:27:45.546331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.333 [2024-11-28 07:27:45.546344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.333 [2024-11-28 07:27:45.561248] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.333 [2024-11-28 07:27:45.561286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.333 [2024-11-28 07:27:45.561314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.333 [2024-11-28 07:27:45.576110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.333 [2024-11-28 07:27:45.576146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.333 [2024-11-28 07:27:45.576174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.333 [2024-11-28 07:27:45.590966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.333 [2024-11-28 07:27:45.591000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.333 [2024-11-28 07:27:45.591027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.333 [2024-11-28 07:27:45.606485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.333 [2024-11-28 07:27:45.606523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.333 [2024-11-28 07:27:45.606552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.622404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.622456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.622485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.637617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.637659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.637688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.652695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.652735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.652763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.667725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.667765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.667792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.683913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.683975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.684005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.699124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 
00:18:23.593 [2024-11-28 07:27:45.699164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.699191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.714285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.714339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.714367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.729326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.729372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.729400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.744296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.744342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.744371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.759330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.759365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.759392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.774390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.774440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.774468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.789138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.789172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.789200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.803535] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.803569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.803597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.824266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.824302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.824342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.838607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.838642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.838669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.593 [2024-11-28 07:27:45.853262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.593 [2024-11-28 07:27:45.853295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.593 [2024-11-28 07:27:45.853333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.868523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.868558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.868585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.883474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.883509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.883536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.898030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.898067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.898094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.912486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.912524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.912551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.926857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.926892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.926920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.942230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.942268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.942296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.957118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.957154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.957182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.971637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.971672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.971699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:45.986114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:45.986156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:45.986183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.000615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.000673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.000702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.015202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.015237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.015264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.030583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.030621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.030634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.045614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.045649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.045676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.060372] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.060408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.060435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.074747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.074782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.074809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.089204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.089239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.089266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.103546] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.103599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.103627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.854 [2024-11-28 07:27:46.117966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:23.854 [2024-11-28 07:27:46.118001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.854 [2024-11-28 07:27:46.118028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.133625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.133661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.133688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.148181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.148218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.148246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.162808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.162843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.162871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.177674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.177724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.177752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.192745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.192782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.192809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.208841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.208883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 
[2024-11-28 07:27:46.208911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.223572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.223609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.223636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.238183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.238220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.238247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.252641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.252675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.252703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.267150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.267185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.267212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.281782] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.281821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.281849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.115 [2024-11-28 07:27:46.296305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.115 [2024-11-28 07:27:46.296351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.115 [2024-11-28 07:27:46.296379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.116 [2024-11-28 07:27:46.310736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.116 [2024-11-28 07:27:46.310772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6223 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.116 [2024-11-28 07:27:46.310800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.116 [2024-11-28 07:27:46.325184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.116 [2024-11-28 07:27:46.325220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.116 [2024-11-28 07:27:46.325247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.116 [2024-11-28 07:27:46.339528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.116 [2024-11-28 07:27:46.339563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.116 [2024-11-28 07:27:46.339590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.116 [2024-11-28 07:27:46.354423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.116 [2024-11-28 07:27:46.354456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.116 [2024-11-28 07:27:46.354483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.116 [2024-11-28 07:27:46.368815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.116 [2024-11-28 07:27:46.368850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.116 [2024-11-28 07:27:46.368876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.116 [2024-11-28 07:27:46.383190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.116 [2024-11-28 07:27:46.383225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.116 [2024-11-28 07:27:46.383252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.399028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.399064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.399091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.413727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.413762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:15915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.413789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.428236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.428271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.428298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.442695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.442730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.442758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.457641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.457677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.457705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.472906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.472940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.472967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.487406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.487440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.487467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.501876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.501911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.501938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.516223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.516261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.516288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.530586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.530619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.530645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.544965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.545000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.545027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.559305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.559348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.559375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.573866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.573901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.573928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.588321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.588369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.588398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.602790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.602825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.602853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.617295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 
[2024-11-28 07:27:46.617338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.376 [2024-11-28 07:27:46.617366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.376 [2024-11-28 07:27:46.631702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.376 [2024-11-28 07:27:46.631737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.377 [2024-11-28 07:27:46.631764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.377 [2024-11-28 07:27:46.646236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.377 [2024-11-28 07:27:46.646293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.377 [2024-11-28 07:27:46.646330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.662032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.662067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.662094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.676680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.676715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.676741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.691046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.691086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.691114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.705831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.705869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.705897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.721709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.721762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.721790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.736471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.736535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.736563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.751082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.751122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.751149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.772061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.772159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.772190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.786827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.786874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.786902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.801335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.801370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.801398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 [2024-11-28 07:27:46.815762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xaca0b0) 00:18:24.637 [2024-11-28 07:27:46.815799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.637 [2024-11-28 07:27:46.815812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:24.637 00:18:24.637 Latency(us) 00:18:24.637 [2024-11-28T07:27:46.912Z] Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.637 [2024-11-28T07:27:46.912Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:24.637 nvme0n1 : 2.00 16733.23 65.36 0.00 0.00 7644.54 6881.28 27763.43 00:18:24.637 [2024-11-28T07:27:46.912Z] =================================================================================================================== 00:18:24.637 [2024-11-28T07:27:46.912Z] Total : 16733.23 65.36 0.00 0.00 7644.54 6881.28 27763.43 00:18:24.637 0 00:18:24.637 07:27:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:24.637 07:27:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:24.637 07:27:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:24.637 07:27:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:24.637 | .driver_specific 00:18:24.637 | .nvme_error 00:18:24.637 | .status_code 00:18:24.637 | .command_transient_transport_error' 00:18:24.897 07:27:47 -- host/digest.sh@71 -- # (( 131 > 0 )) 00:18:24.897 07:27:47 -- host/digest.sh@73 -- # killprocess 84446 00:18:24.897 07:27:47 -- common/autotest_common.sh@936 -- # '[' -z 84446 ']' 00:18:24.897 07:27:47 -- common/autotest_common.sh@940 -- # kill -0 84446 00:18:24.897 07:27:47 -- common/autotest_common.sh@941 -- # uname 00:18:24.897 07:27:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.897 07:27:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84446 00:18:24.897 07:27:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:24.897 killing process with pid 84446 00:18:24.897 07:27:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:24.897 07:27:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84446' 00:18:24.897 07:27:47 -- common/autotest_common.sh@955 -- # kill 84446 00:18:24.897 Received shutdown signal, test time was about 2.000000 seconds 00:18:24.897 00:18:24.897 Latency(us) 00:18:24.897 [2024-11-28T07:27:47.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.897 [2024-11-28T07:27:47.172Z] =================================================================================================================== 00:18:24.897 [2024-11-28T07:27:47.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.897 07:27:47 -- common/autotest_common.sh@960 -- # wait 84446 00:18:25.156 07:27:47 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:18:25.156 07:27:47 -- host/digest.sh@54 -- # local rw bs qd 00:18:25.156 07:27:47 -- host/digest.sh@56 -- # rw=randread 00:18:25.156 07:27:47 -- host/digest.sh@56 -- # bs=131072 00:18:25.156 07:27:47 -- host/digest.sh@56 -- # qd=16 00:18:25.156 07:27:47 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:25.156 07:27:47 -- host/digest.sh@58 -- # bperfpid=84505 00:18:25.156 07:27:47 -- host/digest.sh@60 -- # waitforlisten 84505 /var/tmp/bperf.sock 00:18:25.156 07:27:47 -- common/autotest_common.sh@829 -- # '[' -z 84505 ']' 00:18:25.156 07:27:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:25.156 07:27:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:18:25.156 07:27:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:25.156 07:27:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.156 07:27:47 -- common/autotest_common.sh@10 -- # set +x 00:18:25.416 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:25.416 Zero copy mechanism will not be used. 00:18:25.416 [2024-11-28 07:27:47.449074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:25.416 [2024-11-28 07:27:47.449157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84505 ] 00:18:25.416 [2024-11-28 07:27:47.579962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.676 [2024-11-28 07:27:47.693072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.246 07:27:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.246 07:27:48 -- common/autotest_common.sh@862 -- # return 0 00:18:26.246 07:27:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:26.246 07:27:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:26.512 07:27:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:26.512 07:27:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.512 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:18:26.512 07:27:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.512 07:27:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:26.512 07:27:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:27.082 nvme0n1 00:18:27.082 07:27:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:27.082 07:27:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.082 07:27:49 -- common/autotest_common.sh@10 -- # set +x 00:18:27.082 07:27:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.082 07:27:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:27.082 07:27:49 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:27.082 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:27.082 Zero copy mechanism will not be used. 00:18:27.082 Running I/O for 2 seconds... 
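The xtrace above compresses the whole setup for this digest-error pass into a single stream; the short recap below restates the same RPC sequence with comments, purely for readability. Every flag, path, and jq filter is copied from the trace itself. The only assumption is the autotest convention that bperf_rpc addresses the bdevperf instance at /var/tmp/bperf.sock while the bare rpc_cmd used for error injection addresses the target application's default RPC socket; treat this as a sketch of the flow shown here, not a standalone recipe.

    # Recap of the traced sequence (host/digest.sh); flags copied from the log above.
    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Host (bdevperf) side: keep NVMe error statistics and let the bdev layer
    # retry failed I/O instead of surfacing it to the workload.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Inject crc32c corruption so data digests are computed wrong (flags as traced;
    # shown against the default RPC socket by assumption, since the trace uses rpc_cmd).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

    # Attach the NVMe/TCP controller with data digest checking (--ddgst) enabled.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Drive the queued randread workload, then read back how many completions ended
    # in COMMAND TRANSIENT TRANSPORT ERROR (00/22), the same read-back digest.sh@27-28
    # performed for the previous pass.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $BPERF_RPC bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The per-I/O output that follows is what that injection produces: each corrupted digest is logged by nvme_tcp_accel_seq_recv_compute_crc32_done and completed as a transient transport error, which the retry path absorbs while the error counter above accumulates.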
00:18:27.082 [2024-11-28 07:27:49.240784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.240844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.240873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.244551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.244589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.244602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.248377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.248410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.248423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.252118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.252154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.252167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.255805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.255840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.255867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.259535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.259570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.259597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.263260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.263295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.263334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.266968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.267004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.267030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.270638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.270674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.270701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.274433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.274467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.274494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.278145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.278180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.278207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.281906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.281941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.281968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.285635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.285669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.285696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.289307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.289349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.289377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.293050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.293084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.293111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.296815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.296849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.296877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.300606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.300639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.300666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.304367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.304402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.304413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.308022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.308055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.308108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.311755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.082 [2024-11-28 07:27:49.311790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.082 [2024-11-28 07:27:49.311817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.082 [2024-11-28 07:27:49.315514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.315547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.083 [2024-11-28 07:27:49.315575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.319176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.319211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.319238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.322952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.322987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.323013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.326676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.326710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.326736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.330406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.330441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.330468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.334172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.334207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.334234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.337852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.337886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.337912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.341525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.341558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.341586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.345220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.345253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.345280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.348959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.348993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.349021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.083 [2024-11-28 07:27:49.353033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.083 [2024-11-28 07:27:49.353068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.083 [2024-11-28 07:27:49.353095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.357258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.357293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.357305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.361257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.361292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.361330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.365067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.365102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.365129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.368892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.368927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.368953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.372657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.372690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.372716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.376364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.376399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.376426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.380143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.380179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.380191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.383792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.383827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.383854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.387557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.387590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.387617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.391303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.391348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.391375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.394958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
00:18:27.345 [2024-11-28 07:27:49.394992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.395018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.398672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.398706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.398732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.345 [2024-11-28 07:27:49.402458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.345 [2024-11-28 07:27:49.402491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.345 [2024-11-28 07:27:49.402518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.406097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.406131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.406157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.409742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.409776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.409802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.413422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.413454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.413482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.417159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.417193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.417220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.420960] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.420994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.421021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.424996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.425046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.425074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.429243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.429279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.429322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.433134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.433170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.433197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.437018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.437054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.437081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.440772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.440806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.440834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.444477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.444511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.444538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.448219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.448256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.448268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.451932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.451966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.451993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.455649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.455682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.455708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.459367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.459399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.459426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.463083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.463118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.463144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.466748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.466782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.466809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.470489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.470523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.470535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.474160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.474194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.474221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.477894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.477929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.477955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.481693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.481727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.481753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.485416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.485450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.485476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.489256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.489290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.489317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.493023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.493057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.493084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.496796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.496830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.496856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.500482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.500545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.500571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.346 [2024-11-28 07:27:49.504218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.346 [2024-11-28 07:27:49.504252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.346 [2024-11-28 07:27:49.504263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.507925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.507959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.507986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.511676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.511709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.511736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.515392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.515425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.515452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.519133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.519167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.519194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.523030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.523064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.347 [2024-11-28 07:27:49.523091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.526746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.526781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.526808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.530431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.530465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.530492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.534103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.534137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.534163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.537884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.537919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.537946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.541593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.541627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.541654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.545305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.545348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.545375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.548947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.548980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.549007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.552655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.552689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.552715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.556255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.556289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.556300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.559917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.559951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.559978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.563620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.563653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.563680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.567290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.567334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.567361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.571017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.571052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.571079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.574737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.574771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.574797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.578373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.578407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.578434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.582086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.582120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.582147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.585743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.585777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.585803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.589410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.589443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.589469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.593074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.593108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.593134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.596768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.596802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.596829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.600433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
00:18:27.347 [2024-11-28 07:27:49.600467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.600479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.347 [2024-11-28 07:27:49.604122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.347 [2024-11-28 07:27:49.604157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.347 [2024-11-28 07:27:49.604169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.348 [2024-11-28 07:27:49.607781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.348 [2024-11-28 07:27:49.607815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.348 [2024-11-28 07:27:49.607841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.348 [2024-11-28 07:27:49.611500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.348 [2024-11-28 07:27:49.611534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.348 [2024-11-28 07:27:49.611560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.348 [2024-11-28 07:27:49.615516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.348 [2024-11-28 07:27:49.615565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.348 [2024-11-28 07:27:49.615576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.609 [2024-11-28 07:27:49.619685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.609 [2024-11-28 07:27:49.619718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.609 [2024-11-28 07:27:49.619744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.609 [2024-11-28 07:27:49.623405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.609 [2024-11-28 07:27:49.623437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.609 [2024-11-28 07:27:49.623464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.609 [2024-11-28 07:27:49.627474] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.609 [2024-11-28 07:27:49.627507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.609 [2024-11-28 07:27:49.627533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.609 [2024-11-28 07:27:49.631212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.609 [2024-11-28 07:27:49.631247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.609 [2024-11-28 07:27:49.631274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.609 [2024-11-28 07:27:49.634889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.609 [2024-11-28 07:27:49.634922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.634949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.638691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.638725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.638752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.642457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.642491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.642518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.646242] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.646275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.646302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.649916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.649950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.649977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.653673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.653706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.653733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.657422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.657456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.657483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.661113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.661148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.661175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.665000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.665035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.665062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.669028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.669063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.669090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.673106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.673142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.673169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.677188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.677224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.677251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.681089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.681124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.681151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.685524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.685558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.685586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.689987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.690022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.690049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.693951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.693985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.694012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.698705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.698740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.698768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.702620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.702655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.702682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.706504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.706539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.706567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.710431] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.710465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.710492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.714238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.714273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.714300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.718016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.718050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.718077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.721859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.721894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.721921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.725677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.725711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.725739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.729497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.729531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.729557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.733351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.733385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.610 [2024-11-28 07:27:49.733413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.737249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.737284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.610 [2024-11-28 07:27:49.737311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.610 [2024-11-28 07:27:49.741185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.610 [2024-11-28 07:27:49.741223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.741235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.744932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.744966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.744993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.748763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.748797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.748823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.752553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.752587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.752615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.756309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.756354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.756367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.760051] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.760109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.760122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.763766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.763799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.763826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.767556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.767591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.767618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.771285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.771331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.771360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.775290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.775335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.775363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.779089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.779123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.779150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.782866] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.782900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.782927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.786628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.786662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.786690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.790404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.790438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.790465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.794174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.794210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.794237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.797966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.798000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.798026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.801862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.801896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.801922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.805621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.805655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.805683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.809360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.809393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.809420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.813032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
00:18:27.611 [2024-11-28 07:27:49.813066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.813093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.816745] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.816780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.816807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.820457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.820506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.820533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.824185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.824219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.824231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.827894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.827927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.827953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.831614] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.831647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.831674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.835267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.835300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.835336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.839072] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.839107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.611 [2024-11-28 07:27:49.839134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.611 [2024-11-28 07:27:49.842697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.611 [2024-11-28 07:27:49.842731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.842757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.846406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.846439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.846466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.850169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.850204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.850231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.853931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.853965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.853991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.857627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.857660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.857687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.861292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.861334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.861362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.864980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.865013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.865040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.868718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.868751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.868777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.872481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.872530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.872556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.876132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.876167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.876178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.612 [2024-11-28 07:27:49.880207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.612 [2024-11-28 07:27:49.880245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.612 [2024-11-28 07:27:49.880258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.873 [2024-11-28 07:27:49.884456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.873 [2024-11-28 07:27:49.884505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.873 [2024-11-28 07:27:49.884517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.873 [2024-11-28 07:27:49.888204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.873 [2024-11-28 07:27:49.888241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.873 [2024-11-28 07:27:49.888253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.873 [2024-11-28 07:27:49.892399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.873 [2024-11-28 07:27:49.892464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.873 [2024-11-28 07:27:49.892477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.873 [2024-11-28 07:27:49.896252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.873 [2024-11-28 07:27:49.896289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.873 [2024-11-28 07:27:49.896301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.873 [2024-11-28 07:27:49.899921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.873 [2024-11-28 07:27:49.899953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.873 [2024-11-28 07:27:49.899980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.873 [2024-11-28 07:27:49.903739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.873 [2024-11-28 07:27:49.903773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.873 [2024-11-28 07:27:49.903799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.907421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.907454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.907481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.911144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.911177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.911204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.914808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.914842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.914868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.918506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.918538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.918565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.922169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.922203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.922230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.925855] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.925889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.925915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.929526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.929559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.929585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.933172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.933204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.933232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.936849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.936884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.936911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.940620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.940655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:27.874 [2024-11-28 07:27:49.940683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.944736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.944770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.944797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.949006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.949040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.949066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.952792] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.952825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.952851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.956545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.956578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.956604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.960210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.960245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.960257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.963819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.963852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.963879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.967578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.967611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.967637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.971337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.971370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.971398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.975132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.975166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.975193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.978891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.978924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.978950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.982555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.982589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.982616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.986223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.986257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.986283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.989920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.989954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.989981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.993629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.993662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.993688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:49.997285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:49.997330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:49.997358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:50.000978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:50.001011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:50.001038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.874 [2024-11-28 07:27:50.004713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.874 [2024-11-28 07:27:50.004747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.874 [2024-11-28 07:27:50.004774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.008452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.008485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.008496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.012042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.012098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.012126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.015727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.015761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.015788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.019406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
00:18:27.875 [2024-11-28 07:27:50.019440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.019467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.023002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.023036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.023063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.026700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.026733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.026760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.030403] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.030436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.030462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.034112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.034147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.034174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.037891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.037925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.037952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.041627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.041661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.041688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.045308] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.045349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.045376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.049047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.049082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.049109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.052771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.052805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.052832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.056529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.056561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.056588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.060240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.060275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.060286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.063874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.063908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.063935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.067557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.067590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.067617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.071168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.071201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.071227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.074925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.074958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.074985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.078660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.078694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.078720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.082369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.082402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.082429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.086118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.086152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.086179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.089826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.089859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.089885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.093466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.093499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.093525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.097171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.097205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.097232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.100848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.100881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.875 [2024-11-28 07:27:50.100908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.875 [2024-11-28 07:27:50.104609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.875 [2024-11-28 07:27:50.104643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.104671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.108309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.108352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.108364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.112006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.112041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.112068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.115832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.115867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.115894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.119626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.119661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.119688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.123701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.123764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.123791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.127661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.127726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.127753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.131567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.131610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.131621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.135307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.135372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.135384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.139137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.139171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.139198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:27.876 [2024-11-28 07:27:50.143138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:27.876 [2024-11-28 07:27:50.143172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.876 [2024-11-28 07:27:50.143198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.147419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.147465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:28.138 [2024-11-28 07:27:50.147492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.151220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.151254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.151282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.155451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.155485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.155511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.159199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.159233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.159260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.162930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.162965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.162991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.166677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.166712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.166738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.170455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.170488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.170514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.174207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.174241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.174268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.177953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.177986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.178013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.181755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.181789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.181816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.185479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.185512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.185539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.189174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.189208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.189235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.192907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.192941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.192968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.196634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.196668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.196695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.200438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.200504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.200531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.204758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.204823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.204850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.208724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.208759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.208785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.212385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.212467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.212493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.216157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.216193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.216205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.138 [2024-11-28 07:27:50.219814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.138 [2024-11-28 07:27:50.219847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.138 [2024-11-28 07:27:50.219874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.223595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.223629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.223655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.227252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
00:18:28.139 [2024-11-28 07:27:50.227285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.227311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.230933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.230966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.230993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.234622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.234656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.234682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.238469] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.238502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.238530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.242216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.242250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.242277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.246044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.246077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.246103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.249828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.249861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.249887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.253669] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.253702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.253730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.257374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.257407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.257433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.261102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.261136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.261163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.264827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.264860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.264887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.268573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.268606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.268632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.272183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.272217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.272229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.275853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.275887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.275913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.279487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.279521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.279548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.283127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.283161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.283187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.286845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.286878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.286904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.290554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.290587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.290613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.294286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.294330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.294357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.298045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.298079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.298105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.301735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.301768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.301795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.305461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.305493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.305520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.309082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.309116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.309142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.312771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.312805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.312832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.316394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.316429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.139 [2024-11-28 07:27:50.316440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.139 [2024-11-28 07:27:50.319923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.139 [2024-11-28 07:27:50.319956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.319983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.323662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.323696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.323724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.327357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.327389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.327416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.331036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.331070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.331097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.334754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.334788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.334814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.338424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.338458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.338484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.342044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.342078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.342105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.345759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.345793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.345820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.349444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.349477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.349504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.353209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.353243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:28.140 [2024-11-28 07:27:50.353270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.356961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.356996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.357022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.360657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.360690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.360717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.364268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.364303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.364326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.367978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.368014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.368041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.371688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.371722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.371749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.375284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.375330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.375358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.378993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.379027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.379054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.382679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.382713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.382739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.386430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.386463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.386490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.390132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.390166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.390193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.393807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.393841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.393867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.397463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.397496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.397523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.401145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.401179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.401206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.404863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.404897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.404923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.140 [2024-11-28 07:27:50.408999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.140 [2024-11-28 07:27:50.409033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.140 [2024-11-28 07:27:50.409060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.402 [2024-11-28 07:27:50.413163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.402 [2024-11-28 07:27:50.413196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.402 [2024-11-28 07:27:50.413223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.402 [2024-11-28 07:27:50.416848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.402 [2024-11-28 07:27:50.416898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.402 [2024-11-28 07:27:50.416925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.402 [2024-11-28 07:27:50.421026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.402 [2024-11-28 07:27:50.421060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.402 [2024-11-28 07:27:50.421087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.402 [2024-11-28 07:27:50.424757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.402 [2024-11-28 07:27:50.424790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.402 [2024-11-28 07:27:50.424816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.402 [2024-11-28 07:27:50.428543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.402 [2024-11-28 07:27:50.428575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.402 [2024-11-28 07:27:50.428602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.402 [2024-11-28 07:27:50.432231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
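The entries above repeat one failure path: nvme_tcp.c reports a data digest error on the queue pair, and the READ that was in flight is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). In NVMe/TCP the data digest (DDGST) is a CRC32C computed over the payload of each data PDU; the receiver recomputes it and rejects the PDU when its value differs from the digest carried in the trailer, which is the condition this test provokes on purpose. The C sketch below only illustrates that check under those assumptions; crc32c() and ddgst_ok() are hypothetical helpers, not SPDK's nvme_tcp_accel_seq_recv_compute_crc32_done() path.

    /* Minimal sketch of an NVMe/TCP data-digest (DDGST) check. Hypothetical
     * helper names; shown only to explain the "data digest error" messages. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <stdbool.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(crc & 1u));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* The DDGST trailer of a data PDU must equal CRC32C(payload); a mismatch
     * is what the log reports as a data digest error. */
    static bool ddgst_ok(const uint8_t *payload, size_t len, uint32_t ddgst)
    {
        return crc32c(payload, len) == ddgst;
    }

    int main(void)
    {
        uint8_t data[512];
        memset(data, 0xA5, sizeof(data));

        uint32_t good = crc32c(data, sizeof(data));
        uint32_t bad  = good ^ 0x1u;   /* simulate a corrupted digest */

        printf("intact PDU:  %s\n", ddgst_ok(data, sizeof(data), good) ? "ok" : "data digest error");
        printf("corrupt PDU: %s\n", ddgst_ok(data, sizeof(data), bad)  ? "ok" : "data digest error");
        return 0;
    }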
00:18:28.402 [2024-11-28 07:27:50.432265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.402 [2024-11-28 07:27:50.432276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.402 [2024-11-28 07:27:50.435842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.402 [2024-11-28 07:27:50.435875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.402 [2024-11-28 07:27:50.435902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.402 [2024-11-28 07:27:50.439661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.402 [2024-11-28 07:27:50.439696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.402 [2024-11-28 07:27:50.439722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.443338] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.443372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.443398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.447078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.447112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.447139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.450799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.450832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.450859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.454539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.454572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.454599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.458465] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.458502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.458529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.462631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.462666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.462677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.466858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.466892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.466919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.470588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.470622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.470648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.474389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.474424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.474450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.478131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.478165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.478191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.482091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.482126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.482153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.485851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.485885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.485911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.489582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.489615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.489640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.493377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.493410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.493437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.497120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.497155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.497181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.500884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.500917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.500943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.504674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.504707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.504734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.508311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.508358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.508371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.512023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.512056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.512109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.515810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.515844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.515870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.519502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.519535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.519561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.523247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.523280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.523306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.527017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.527051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.527077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.530730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.530763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.530790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.534496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.534530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.534556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.538182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.403 [2024-11-28 07:27:50.538215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.403 [2024-11-28 07:27:50.538241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.403 [2024-11-28 07:27:50.541975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.542008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.542034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.545825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.545859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.545886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.549597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.549630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.549657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.553344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.553387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.553413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.557093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.557126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.557153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.560777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.560810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
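The READ prints in this stretch show queue 1, command id 15, namespace 1, a varying starting LBA, and a fixed length of 32 blocks carried with an SGL transport data block descriptor. As a rough sketch of how those fields would land in the dwords of an NVMe read command (SLBA split across CDW10/CDW11, 0-based NLB in CDW12), the hypothetical encode_read() below is illustrative only and is not the SPDK code that produced these lines.

    #include <stdint.h>
    #include <stdio.h>

    struct read_cdws {
        uint32_t cdw10;   /* SLBA[31:0]  */
        uint32_t cdw11;   /* SLBA[63:32] */
        uint32_t cdw12;   /* NLB, 0-based, in bits 15:0 */
    };

    /* Hypothetical helper: pack a starting LBA and block count the way an
     * NVMe read command carries them. */
    static struct read_cdws encode_read(uint64_t slba, uint32_t nblocks)
    {
        struct read_cdws c = {
            .cdw10 = (uint32_t)(slba & 0xFFFFFFFFu),
            .cdw11 = (uint32_t)(slba >> 32),
            .cdw12 = (nblocks - 1u) & 0xFFFFu,   /* len:32 -> NLB = 31 */
        };
        return c;
    }

    int main(void)
    {
        /* One of the reads from this log: lba:5248 len:32. */
        struct read_cdws c = encode_read(5248, 32);
        printf("cdw10=0x%08x cdw11=0x%08x cdw12=0x%08x\n", c.cdw10, c.cdw11, c.cdw12);
        return 0;
    }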
00:18:28.404 [2024-11-28 07:27:50.560837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.564463] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.564511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.564537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.568126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.568160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.568172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.571748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.571781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.571808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.575502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.575549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.575576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.579214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.579248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.579275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.582931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.582965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.582993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.586607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.586641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.586667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.590325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.590357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.590384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.594073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.594107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.594133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.597849] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.597884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.597910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.601682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.601715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.601741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.605451] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.605484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.605510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.609195] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.609229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.609256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.612969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.613002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.613028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.616623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.616656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.616683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.620355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.620389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.620401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.623950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.623983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.624010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.627626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.627659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.627685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.631287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.631334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.631361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.634995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.404 [2024-11-28 07:27:50.635028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.404 [2024-11-28 07:27:50.635055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.404 [2024-11-28 07:27:50.638748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
00:18:28.404 [2024-11-28 07:27:50.638781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.638807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.642409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.642442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.642469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.646075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.646109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.646136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.649752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.649785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.649812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.653482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.653531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.653557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.657151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.657186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.657213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.660825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.660859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.660886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.664514] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.664547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.664573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.668159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.668194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.668206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.405 [2024-11-28 07:27:50.672042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.405 [2024-11-28 07:27:50.672100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.405 [2024-11-28 07:27:50.672129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.676366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.676444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.676472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.680036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.680069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.680120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.684377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.684429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.684456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.688002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.688035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.688061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.691772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.691806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.691832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.695519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.695552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.695578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.699222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.699256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.699282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.702927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.702962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.702989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.706737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.706771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.706798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.710466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.710499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.710526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.714157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.714190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.714216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.718099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.718150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.718176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.722301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.722344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.722371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.726493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.726527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.726554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.730276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.730334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.730346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.734021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.734055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.734081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.737735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.737769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.737795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.741527] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.741560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.741587] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.745269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.745303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.745341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.748942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.748975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.749003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.752637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.752669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.752695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.756371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.756407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.756419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.759993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.760026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.760052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.763768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.763802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.763828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.767510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.767543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.767570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.771171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.667 [2024-11-28 07:27:50.771205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.667 [2024-11-28 07:27:50.771231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.667 [2024-11-28 07:27:50.774956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.774990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.775016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.778643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.778676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.778703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.782363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.782396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.782422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.786060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.786094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.786121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.789754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.789788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.789814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.793492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.793525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.793552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.797304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.797349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.797377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.801204] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.801238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.801265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.805184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.805218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.805245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.809481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.809515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.809543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.813558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.813592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.813619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.817532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.817575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.817602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.821534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.821569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.821596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.825515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.825548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.825575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.829433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.829465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.829492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.833215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.833249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.833276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.836982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.837017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.837044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.840789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.840822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.840849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.844448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.844482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.844493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.848028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
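Every completion in this block carries the same status: sct 0x0 (generic command status) with sc 0x22, printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and dnr:0, so the host is still allowed to retry the command. Assuming the usual completion DW3 status layout (phase in bit 0, SC in bits 8:1, SCT in bits 11:9, DNR in bit 15), the hypothetical decode_status() below sketches how those fields are pulled apart; it is not spdk_nvme_print_completion().

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    struct status_fields {
        bool    p;     /* phase tag                                        */
        uint8_t sc;    /* status code: 0x22 = transient transport error    */
        uint8_t sct;   /* status code type: 0x0 = generic command status   */
        bool    dnr;   /* do-not-retry: 0 means the host may retry the I/O */
    };

    /* Hypothetical decode of the 16-bit status halfword from completion DW3. */
    static struct status_fields decode_status(uint16_t sf)
    {
        struct status_fields f = {
            .p   = sf & 0x1u,
            .sc  = (uint8_t)((sf >> 1) & 0xFFu),
            .sct = (uint8_t)((sf >> 9) & 0x7u),
            .dnr = (sf >> 15) & 0x1u,
        };
        return f;
    }

    int main(void)
    {
        /* The value behind the "(00/22) ... p:0 m:0 dnr:0" prints above. */
        uint16_t sf = (uint16_t)(0x22u << 1);   /* SCT=0, SC=0x22, DNR=0, P=0 */
        struct status_fields f = decode_status(sf);

        printf("sct=%02x sc=%02x dnr=%d -> %s\n", f.sct, f.sc, f.dnr,
               (f.sct == 0 && f.sc == 0x22) ? "transient transport error (retryable)" : "other");
        return 0;
    }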
00:18:28.668 [2024-11-28 07:27:50.848062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.848112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.851769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.851801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.851828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.855610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.855642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.855669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.859257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.859291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.859317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.862970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.863003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.863030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.866668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.866701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.866729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.870544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.870577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.870589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.874203] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.668 [2024-11-28 07:27:50.874237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.668 [2024-11-28 07:27:50.874263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.668 [2024-11-28 07:27:50.878004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.878038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.878065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.881793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.881828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.881854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.885648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.885681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.885707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.889423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.889459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.889486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.893139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.893173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.893200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.896862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.896895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.896921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.900586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.900619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.900646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.904272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.904318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.904331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.907877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.907911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.907937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.911657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.911690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.911716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.915418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.915466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.915492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.919062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.919095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.919122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.922760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.922794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.922821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.926448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.926480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.926507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.930308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.930363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.930391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.934031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.934065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.934091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.669 [2024-11-28 07:27:50.938272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.669 [2024-11-28 07:27:50.938331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.669 [2024-11-28 07:27:50.938344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.942440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.942473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.942500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.946374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.946428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.946439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.950340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.950373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.950400] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.954166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.954217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.954244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.957861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.957894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.957920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.961647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.961679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.961691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.965513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.965545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.965571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.969260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.969293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.969330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.973047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.973081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.973107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.977036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.977070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.977097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.981376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.981420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.981447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.985805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.985839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.985866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.989856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.989891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.989918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.993667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.993703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.931 [2024-11-28 07:27:50.993730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.931 [2024-11-28 07:27:50.997358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.931 [2024-11-28 07:27:50.997390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:50.997417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.001186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.001220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.001247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.004996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.005029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.005055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.008810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.008843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.008869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.012558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.012591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.012634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.016319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.016363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.016375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.020030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.020064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.020117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.023879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.023913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.023939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.027571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.027606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.027632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.031279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.031339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.031353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.034985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.035020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.035046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.038701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.038735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.038761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.042423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.042458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.042485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.046256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.046290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.046317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.050088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.050122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.050148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.053860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.053895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.053922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.057624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 
00:18:28.932 [2024-11-28 07:27:51.057657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.057683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.061312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.061357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.061384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.065061] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.065096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.065122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.068827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.068862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.068889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.072582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.072615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.072641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.076435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.076469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.076480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.080159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.080196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.080207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.083853] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.083887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.083913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.087631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.087666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.087693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.091330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.091368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.091394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.095111] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.095146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.932 [2024-11-28 07:27:51.095172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.932 [2024-11-28 07:27:51.098857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.932 [2024-11-28 07:27:51.098892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.098919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.102539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.102573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.102600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.106768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.106802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.106829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.110453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.110487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.110513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.114223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.114258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.114285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.117951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.117986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.118013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.121715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.121749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.121776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.125436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.125468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.125495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.129272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.129333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.129347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.133170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.133205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.133231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.137055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.137089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.137116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.140927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.140961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.140987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.144943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.144977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.145004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.148997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.149032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.149058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.152917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.152952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.152979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.156847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.156881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.156908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.160572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.160606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.160633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.164218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.164253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.164265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.167905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.167941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.167967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.171565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.171598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.171624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.175256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.175291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.175318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.178929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.178964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.178991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.182624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.182657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.182684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.186357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.186391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:28.933 [2024-11-28 07:27:51.186418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.190000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.190034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.190060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.193679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.193714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.193740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.197289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.197332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.933 [2024-11-28 07:27:51.197360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:28.933 [2024-11-28 07:27:51.201375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:28.933 [2024-11-28 07:27:51.201464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:28.934 [2024-11-28 07:27:51.201493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.192 [2024-11-28 07:27:51.205629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:29.192 [2024-11-28 07:27:51.205664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.192 [2024-11-28 07:27:51.205691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.192 [2024-11-28 07:27:51.209346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:29.192 [2024-11-28 07:27:51.209378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.192 [2024-11-28 07:27:51.209405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.192 [2024-11-28 07:27:51.213400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:29.192 [2024-11-28 07:27:51.213432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.192 [2024-11-28 07:27:51.213458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.192 [2024-11-28 07:27:51.217167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:29.192 [2024-11-28 07:27:51.217201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.192 [2024-11-28 07:27:51.217227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.192 [2024-11-28 07:27:51.220902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:29.192 [2024-11-28 07:27:51.220936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.192 [2024-11-28 07:27:51.220962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:29.192 [2024-11-28 07:27:51.224626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:29.192 [2024-11-28 07:27:51.224659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.192 [2024-11-28 07:27:51.224686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:29.192 [2024-11-28 07:27:51.228305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:29.192 [2024-11-28 07:27:51.228351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.192 [2024-11-28 07:27:51.228363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:29.192 [2024-11-28 07:27:51.232007] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x123e680) 00:18:29.192 [2024-11-28 07:27:51.232040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:29.192 [2024-11-28 07:27:51.232067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:29.192 00:18:29.192 Latency(us) 00:18:29.192 [2024-11-28T07:27:51.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.192 [2024-11-28T07:27:51.467Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:29.192 nvme0n1 : 2.00 8186.17 1023.27 0.00 0.00 1951.81 1660.74 6553.60 00:18:29.192 [2024-11-28T07:27:51.467Z] =================================================================================================================== 00:18:29.192 [2024-11-28T07:27:51.467Z] Total : 8186.17 1023.27 0.00 0.00 1951.81 1660.74 6553.60 00:18:29.192 0 00:18:29.192 07:27:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:29.192 07:27:51 -- host/digest.sh@27 
-- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:29.192 07:27:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:29.192 07:27:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:29.192 | .driver_specific 00:18:29.192 | .nvme_error 00:18:29.192 | .status_code 00:18:29.192 | .command_transient_transport_error' 00:18:29.451 07:27:51 -- host/digest.sh@71 -- # (( 528 > 0 )) 00:18:29.451 07:27:51 -- host/digest.sh@73 -- # killprocess 84505 00:18:29.451 07:27:51 -- common/autotest_common.sh@936 -- # '[' -z 84505 ']' 00:18:29.451 07:27:51 -- common/autotest_common.sh@940 -- # kill -0 84505 00:18:29.451 07:27:51 -- common/autotest_common.sh@941 -- # uname 00:18:29.451 07:27:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.451 07:27:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84505 00:18:29.451 07:27:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:29.451 killing process with pid 84505 00:18:29.451 07:27:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:29.451 07:27:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84505' 00:18:29.451 Received shutdown signal, test time was about 2.000000 seconds 00:18:29.451 00:18:29.451 Latency(us) 00:18:29.451 [2024-11-28T07:27:51.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.451 [2024-11-28T07:27:51.726Z] =================================================================================================================== 00:18:29.451 [2024-11-28T07:27:51.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.451 07:27:51 -- common/autotest_common.sh@955 -- # kill 84505 00:18:29.451 07:27:51 -- common/autotest_common.sh@960 -- # wait 84505 00:18:29.711 07:27:51 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:18:29.711 07:27:51 -- host/digest.sh@54 -- # local rw bs qd 00:18:29.711 07:27:51 -- host/digest.sh@56 -- # rw=randwrite 00:18:29.711 07:27:51 -- host/digest.sh@56 -- # bs=4096 00:18:29.711 07:27:51 -- host/digest.sh@56 -- # qd=128 00:18:29.711 07:27:51 -- host/digest.sh@58 -- # bperfpid=84561 00:18:29.711 07:27:51 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:29.711 07:27:51 -- host/digest.sh@60 -- # waitforlisten 84561 /var/tmp/bperf.sock 00:18:29.711 07:27:51 -- common/autotest_common.sh@829 -- # '[' -z 84561 ']' 00:18:29.711 07:27:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:29.711 07:27:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:29.711 07:27:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:29.711 07:27:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.711 07:27:51 -- common/autotest_common.sh@10 -- # set +x 00:18:29.711 [2024-11-28 07:27:51.880790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
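The transient-error check traced above reduces to a single iostat query over the bperf RPC socket plus a jq filter. A minimal standalone sketch, assuming bdevperf is still listening on /var/tmp/bperf.sock and the bdev is named nvme0n1 as in this run:

    # Ask bdevperf for per-bdev I/O stats (the nvme_error counters are only populated
    # because --nvme-error-stat was set on bdev_nvme_set_options) and extract the
    # transient transport error count, using the same jq path as host/digest.sh above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))   # in this run the counter came back as 528, so the check passed

In the trace this is the (( 528 > 0 )) test; once it passes, the bdevperf process for the read case (pid 84505) is killed and the next case is started.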
00:18:29.711 [2024-11-28 07:27:51.880876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84561 ] 00:18:29.970 [2024-11-28 07:27:52.013295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.970 [2024-11-28 07:27:52.127094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.908 07:27:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.908 07:27:52 -- common/autotest_common.sh@862 -- # return 0 00:18:30.908 07:27:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:30.908 07:27:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:30.908 07:27:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:30.908 07:27:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.908 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:18:30.908 07:27:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.908 07:27:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:30.908 07:27:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:31.476 nvme0n1 00:18:31.476 07:27:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:31.476 07:27:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.476 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:18:31.476 07:27:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.476 07:27:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:31.476 07:27:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:31.476 Running I/O for 2 seconds... 
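Condensed, the setup for this randwrite / 4096-byte / qd-128 case follows the commands traced above. A sketch using the same paths, target address, and nqn printed in this log; note that accel_error_inject_error is issued through the harness helper rpc_cmd (its socket is not shown in this excerpt and is assumed to be the main SPDK application's), while the bperf_rpc calls go to the bperf socket:

    # Start bdevperf idle (-z) on its own RPC socket for the randwrite workload,
    # then wait for /var/tmp/bperf.sock to come up (waitforlisten in the trace).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Count NVMe errors per status code instead of failing I/O, and retry indefinitely.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep injection off while attaching the TCP controller with data digest (--ddgst) enabled.
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm the corruption: every 256th crc32c operation is corrupted, so data digests fail.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

    # Drive I/O for the 2-second window; the digest errors that follow are the expected result.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The "Data digest error on tqpair" messages below are therefore the intended outcome of the injected crc32c corruption, and the case is judged by the transient-error counter afterwards, just as in the read case above.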
00:18:31.476 [2024-11-28 07:27:53.602976] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ddc00 00:18:31.476 [2024-11-28 07:27:53.604388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.604436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.617931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:31.476 [2024-11-28 07:27:53.619189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.619224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.632180] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ff3c8 00:18:31.476 [2024-11-28 07:27:53.633378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.633408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.646953] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190feb58 00:18:31.476 [2024-11-28 07:27:53.648458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.648508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.661509] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fe720 00:18:31.476 [2024-11-28 07:27:53.662696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.662728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.675259] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fe2e8 00:18:31.476 [2024-11-28 07:27:53.676585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.676617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.689533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fdeb0 00:18:31.476 [2024-11-28 07:27:53.690697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.690730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.703299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fda78 00:18:31.476 [2024-11-28 07:27:53.704512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.704546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.717707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fd640 00:18:31.476 [2024-11-28 07:27:53.718882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.718917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.731521] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fd208 00:18:31.476 [2024-11-28 07:27:53.732722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.732755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:31.476 [2024-11-28 07:27:53.745635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fcdd0 00:18:31.476 [2024-11-28 07:27:53.746949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.476 [2024-11-28 07:27:53.746983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.760991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fc998 00:18:31.736 [2024-11-28 07:27:53.762122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.762155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.775432] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fc560 00:18:31.736 [2024-11-28 07:27:53.776770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.776818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.790650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fc128 00:18:31.736 [2024-11-28 07:27:53.791808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.791839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.805348] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fbcf0 00:18:31.736 [2024-11-28 07:27:53.806567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.806599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.819888] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fb8b8 00:18:31.736 [2024-11-28 07:27:53.821103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.821136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.835151] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fb480 00:18:31.736 [2024-11-28 07:27:53.836298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.836363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.850073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fb048 00:18:31.736 [2024-11-28 07:27:53.851118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.851148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.865426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fac10 00:18:31.736 [2024-11-28 07:27:53.866476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.866510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.879422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fa7d8 00:18:31.736 [2024-11-28 07:27:53.880485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.880517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.892975] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fa3a0 00:18:31.736 [2024-11-28 07:27:53.893994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.894025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.906302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f9f68 00:18:31.736 [2024-11-28 07:27:53.907308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.736 [2024-11-28 07:27:53.907348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:31.736 [2024-11-28 07:27:53.920115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f9b30 00:18:31.736 [2024-11-28 07:27:53.921145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.737 [2024-11-28 07:27:53.921177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:31.737 [2024-11-28 07:27:53.933445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f96f8 00:18:31.737 [2024-11-28 07:27:53.934514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.737 [2024-11-28 07:27:53.934563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:31.737 [2024-11-28 07:27:53.947401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f92c0 00:18:31.737 [2024-11-28 07:27:53.948457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.737 [2024-11-28 07:27:53.948489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:31.737 [2024-11-28 07:27:53.961485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f8e88 00:18:31.737 [2024-11-28 07:27:53.962483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.737 [2024-11-28 07:27:53.962514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:31.737 [2024-11-28 07:27:53.975037] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f8a50 00:18:31.737 [2024-11-28 07:27:53.976239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.737 [2024-11-28 07:27:53.976273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:31.737 [2024-11-28 07:27:53.989910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f8618 00:18:31.737 [2024-11-28 07:27:53.990877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.737 [2024-11-28 07:27:53.990907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:31.737 [2024-11-28 07:27:54.003354] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f81e0 00:18:31.737 [2024-11-28 07:27:54.004312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.737 [2024-11-28 07:27:54.004353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:31.996 [2024-11-28 07:27:54.018121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f7da8 00:18:31.996 [2024-11-28 07:27:54.019050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.996 [2024-11-28 07:27:54.019083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:31.996 [2024-11-28 07:27:54.031596] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f7970 00:18:31.996 [2024-11-28 07:27:54.032632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.996 [2024-11-28 07:27:54.032663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:31.996 [2024-11-28 07:27:54.045144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f7538 00:18:31.996 [2024-11-28 07:27:54.046060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.996 [2024-11-28 07:27:54.046092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:31.996 [2024-11-28 07:27:54.058456] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f7100 00:18:31.997 [2024-11-28 07:27:54.059349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.059388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.071726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f6cc8 00:18:31.997 [2024-11-28 07:27:54.072688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.072719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.085561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f6890 00:18:31.997 [2024-11-28 07:27:54.086524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.086556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.099654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f6458 00:18:31.997 [2024-11-28 07:27:54.100647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.100679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.113944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f6020 00:18:31.997 [2024-11-28 07:27:54.114841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.114887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.127873] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f5be8 00:18:31.997 [2024-11-28 07:27:54.128834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.128867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.141566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f57b0 00:18:31.997 [2024-11-28 07:27:54.142427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.155414] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f5378 00:18:31.997 [2024-11-28 07:27:54.156334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.156373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.169187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f4f40 00:18:31.997 [2024-11-28 07:27:54.170111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.170157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.183557] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f4b08 00:18:31.997 [2024-11-28 07:27:54.184427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 
07:27:54.184460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.197657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f46d0 00:18:31.997 [2024-11-28 07:27:54.198493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.198540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.212691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f4298 00:18:31.997 [2024-11-28 07:27:54.213587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.213619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.227595] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f3e60 00:18:31.997 [2024-11-28 07:27:54.228616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.228678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.241869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f3a28 00:18:31.997 [2024-11-28 07:27:54.242688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.242719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.255484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f35f0 00:18:31.997 [2024-11-28 07:27:54.256347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.256379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:31.997 [2024-11-28 07:27:54.270057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f31b8 00:18:31.997 [2024-11-28 07:27:54.270947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.997 [2024-11-28 07:27:54.270981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.285704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f2d80 00:18:32.257 [2024-11-28 07:27:54.286481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:32.257 [2024-11-28 07:27:54.286513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.300922] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f2948 00:18:32.257 [2024-11-28 07:27:54.301695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.301727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.314615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f2510 00:18:32.257 [2024-11-28 07:27:54.315385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.315416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.328491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f20d8 00:18:32.257 [2024-11-28 07:27:54.329272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.329304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.343485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f1ca0 00:18:32.257 [2024-11-28 07:27:54.344474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.344509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.358713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f1868 00:18:32.257 [2024-11-28 07:27:54.359490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.359523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.373197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f1430 00:18:32.257 [2024-11-28 07:27:54.373938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.373969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.387438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f0ff8 00:18:32.257 [2024-11-28 07:27:54.388266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16257 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.388300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.401679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f0bc0 00:18:32.257 [2024-11-28 07:27:54.402434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.402467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.415756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f0788 00:18:32.257 [2024-11-28 07:27:54.416588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.416620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.429883] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190f0350 00:18:32.257 [2024-11-28 07:27:54.430707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.430738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.444144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190eff18 00:18:32.257 [2024-11-28 07:27:54.444904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.444936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.458097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190efae0 00:18:32.257 [2024-11-28 07:27:54.458904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.257 [2024-11-28 07:27:54.458936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:32.257 [2024-11-28 07:27:54.472230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ef6a8 00:18:32.257 [2024-11-28 07:27:54.472971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.258 [2024-11-28 07:27:54.473017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:32.258 [2024-11-28 07:27:54.486120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ef270 00:18:32.258 [2024-11-28 07:27:54.486860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:20161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.258 [2024-11-28 07:27:54.486893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:32.258 [2024-11-28 07:27:54.500599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190eee38 00:18:32.258 [2024-11-28 07:27:54.501267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.258 [2024-11-28 07:27:54.501298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:32.258 [2024-11-28 07:27:54.514215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190eea00 00:18:32.258 [2024-11-28 07:27:54.514858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.258 [2024-11-28 07:27:54.514890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:32.258 [2024-11-28 07:27:54.527905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ee5c8 00:18:32.258 [2024-11-28 07:27:54.528702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.258 [2024-11-28 07:27:54.528733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.542485] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ee190 00:18:32.518 [2024-11-28 07:27:54.543097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.543129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.556007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190edd58 00:18:32.518 [2024-11-28 07:27:54.556732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.556764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.569738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ed920 00:18:32.518 [2024-11-28 07:27:54.570322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.570377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.583245] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ed4e8 00:18:32.518 [2024-11-28 07:27:54.583844] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.583875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.596929] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ed0b0 00:18:32.518 [2024-11-28 07:27:54.597539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.597572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.611772] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ecc78 00:18:32.518 [2024-11-28 07:27:54.612402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.612461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.625640] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ec840 00:18:32.518 [2024-11-28 07:27:54.626219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:10010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.626250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.639356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ec408 00:18:32.518 [2024-11-28 07:27:54.639894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.639926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.652974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ebfd0 00:18:32.518 [2024-11-28 07:27:54.653516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.653561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.666549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ebb98 00:18:32.518 [2024-11-28 07:27:54.667092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.667137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.680053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190eb760 00:18:32.518 [2024-11-28 07:27:54.680669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.680703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.693602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190eb328 00:18:32.518 [2024-11-28 07:27:54.694108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.694140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.707108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190eaef0 00:18:32.518 [2024-11-28 07:27:54.707616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.707649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.720653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190eaab8 00:18:32.518 [2024-11-28 07:27:54.721139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.721172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.734092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ea680 00:18:32.518 [2024-11-28 07:27:54.734604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.734637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:32.518 [2024-11-28 07:27:54.747611] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190ea248 00:18:32.518 [2024-11-28 07:27:54.748126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.518 [2024-11-28 07:27:54.748161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:32.519 [2024-11-28 07:27:54.761280] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e9e10 00:18:32.519 [2024-11-28 07:27:54.761751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.519 [2024-11-28 07:27:54.761784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:32.519 [2024-11-28 07:27:54.774826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e99d8 00:18:32.519 [2024-11-28 
07:27:54.775275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.519 [2024-11-28 07:27:54.775325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:32.519 [2024-11-28 07:27:54.788759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e95a0 00:18:32.519 [2024-11-28 07:27:54.789225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.519 [2024-11-28 07:27:54.789262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.803251] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e9168 00:18:32.779 [2024-11-28 07:27:54.803699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.803733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.816941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e8d30 00:18:32.779 [2024-11-28 07:27:54.817383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.817415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.830426] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e88f8 00:18:32.779 [2024-11-28 07:27:54.830871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.830903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.843966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e84c0 00:18:32.779 [2024-11-28 07:27:54.844500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.844550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.857500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e8088 00:18:32.779 [2024-11-28 07:27:54.857900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.857933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.872196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e7c50 
00:18:32.779 [2024-11-28 07:27:54.872675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.872708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.885687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e7818 00:18:32.779 [2024-11-28 07:27:54.886057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.886101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.899206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e73e0 00:18:32.779 [2024-11-28 07:27:54.899611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.899650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.913018] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e6fa8 00:18:32.779 [2024-11-28 07:27:54.913385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.913414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.926549] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e6b70 00:18:32.779 [2024-11-28 07:27:54.926929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.926961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.940036] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e6738 00:18:32.779 [2024-11-28 07:27:54.940485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.940516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.954001] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e6300 00:18:32.779 [2024-11-28 07:27:54.954348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.779 [2024-11-28 07:27:54.954394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:32.779 [2024-11-28 07:27:54.967571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23dad30) with pdu=0x2000190e5ec8 00:18:32.779 [2024-11-28 07:27:54.967891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.780 [2024-11-28 07:27:54.967919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:32.780 [2024-11-28 07:27:54.981034] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e5a90 00:18:32.780 [2024-11-28 07:27:54.981359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.780 [2024-11-28 07:27:54.981402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:32.780 [2024-11-28 07:27:54.994542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e5658 00:18:32.780 [2024-11-28 07:27:54.994882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.780 [2024-11-28 07:27:54.994914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:32.780 [2024-11-28 07:27:55.008062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e5220 00:18:32.780 [2024-11-28 07:27:55.008469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.780 [2024-11-28 07:27:55.008502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:32.780 [2024-11-28 07:27:55.021721] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e4de8 00:18:32.780 [2024-11-28 07:27:55.022005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.780 [2024-11-28 07:27:55.022035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:32.780 [2024-11-28 07:27:55.035281] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e49b0 00:18:32.780 [2024-11-28 07:27:55.035568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.780 [2024-11-28 07:27:55.035596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:32.780 [2024-11-28 07:27:55.049378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e4578 00:18:32.780 [2024-11-28 07:27:55.049710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:32.780 [2024-11-28 07:27:55.049739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.063955] tcp.c:2036:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e4140 00:18:33.040 [2024-11-28 07:27:55.064262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.064312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.077528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e3d08 00:18:33.040 [2024-11-28 07:27:55.077777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.077804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.091116] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e38d0 00:18:33.040 [2024-11-28 07:27:55.091370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.091389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.104737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e3498 00:18:33.040 [2024-11-28 07:27:55.104973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.104992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.118337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e3060 00:18:33.040 [2024-11-28 07:27:55.118628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.118658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.132942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e2c28 00:18:33.040 [2024-11-28 07:27:55.133162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.133182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.146422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e27f0 00:18:33.040 [2024-11-28 07:27:55.146639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.146659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.160150] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e23b8 00:18:33.040 [2024-11-28 07:27:55.160376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.160397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.173680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e1f80 00:18:33.040 [2024-11-28 07:27:55.173877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.173897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.187129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e1b48 00:18:33.040 [2024-11-28 07:27:55.187317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.187337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.201337] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e1710 00:18:33.040 [2024-11-28 07:27:55.201527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.201547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.214987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e12d8 00:18:33.040 [2024-11-28 07:27:55.215158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.215178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.229091] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e0ea0 00:18:33.040 [2024-11-28 07:27:55.229259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.229280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.244318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e0a68 00:18:33.040 [2024-11-28 07:27:55.244500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.244536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 
07:27:55.259047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e0630 00:18:33.040 [2024-11-28 07:27:55.259193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.259214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.273062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e01f8 00:18:33.040 [2024-11-28 07:27:55.273198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.273217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.286618] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190dfdc0 00:18:33.040 [2024-11-28 07:27:55.286749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.286771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:33.040 [2024-11-28 07:27:55.300115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190df988 00:18:33.040 [2024-11-28 07:27:55.300244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.040 [2024-11-28 07:27:55.300265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:33.299 [2024-11-28 07:27:55.314361] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190df550 00:18:33.299 [2024-11-28 07:27:55.314483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.299 [2024-11-28 07:27:55.314504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:33.299 [2024-11-28 07:27:55.328517] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190df118 00:18:33.299 [2024-11-28 07:27:55.328638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.299 [2024-11-28 07:27:55.328658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:33.299 [2024-11-28 07:27:55.342027] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190dece0 00:18:33.299 [2024-11-28 07:27:55.342122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.299 [2024-11-28 07:27:55.342141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 
00:18:33.299 [2024-11-28 07:27:55.355472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190de8a8 00:18:33.299 [2024-11-28 07:27:55.355559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.299 [2024-11-28 07:27:55.355579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:33.299 [2024-11-28 07:27:55.369051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190de038 00:18:33.299 [2024-11-28 07:27:55.369131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.299 [2024-11-28 07:27:55.369151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:33.299 [2024-11-28 07:27:55.389412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190de038 00:18:33.299 [2024-11-28 07:27:55.390620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.299 [2024-11-28 07:27:55.390651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:33.299 [2024-11-28 07:27:55.403090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190de470 00:18:33.299 [2024-11-28 07:27:55.404300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.299 [2024-11-28 07:27:55.404361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:33.299 [2024-11-28 07:27:55.416758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190de8a8 00:18:33.299 [2024-11-28 07:27:55.417953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.299 [2024-11-28 07:27:55.417984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:33.299 [2024-11-28 07:27:55.430296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190dece0 00:18:33.300 [2024-11-28 07:27:55.431459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.431489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.443907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190df118 00:18:33.300 [2024-11-28 07:27:55.445142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.445173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.457673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190df550 00:18:33.300 [2024-11-28 07:27:55.458830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.458860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.471323] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190df988 00:18:33.300 [2024-11-28 07:27:55.472539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.472569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.484897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190dfdc0 00:18:33.300 [2024-11-28 07:27:55.486060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.486090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.498454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e01f8 00:18:33.300 [2024-11-28 07:27:55.499569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.499600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.512577] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e0630 00:18:33.300 [2024-11-28 07:27:55.513800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.513849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.527662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e0a68 00:18:33.300 [2024-11-28 07:27:55.528949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.528981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.542376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e0ea0 00:18:33.300 [2024-11-28 07:27:55.543561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.543593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.557126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e12d8 00:18:33.300 [2024-11-28 07:27:55.558285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.558360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:33.300 [2024-11-28 07:27:55.571630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e1710 00:18:33.300 [2024-11-28 07:27:55.572930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.300 [2024-11-28 07:27:55.572976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:33.559 [2024-11-28 07:27:55.586563] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190e1b48 00:18:33.559 [2024-11-28 07:27:55.587677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:33.559 [2024-11-28 07:27:55.587708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:33.559 00:18:33.559 Latency(us) 00:18:33.559 [2024-11-28T07:27:55.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.559 [2024-11-28T07:27:55.834Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.559 nvme0n1 : 2.01 18097.93 70.70 0.00 0.00 7067.20 6196.13 21448.15 00:18:33.559 [2024-11-28T07:27:55.834Z] =================================================================================================================== 00:18:33.559 [2024-11-28T07:27:55.834Z] Total : 18097.93 70.70 0.00 0.00 7067.20 6196.13 21448.15 00:18:33.559 0 00:18:33.559 07:27:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:33.559 07:27:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:33.559 07:27:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:33.559 | .driver_specific 00:18:33.559 | .nvme_error 00:18:33.559 | .status_code 00:18:33.559 | .command_transient_transport_error' 00:18:33.559 07:27:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:33.818 07:27:55 -- host/digest.sh@71 -- # (( 142 > 0 )) 00:18:33.818 07:27:55 -- host/digest.sh@73 -- # killprocess 84561 00:18:33.818 07:27:55 -- common/autotest_common.sh@936 -- # '[' -z 84561 ']' 00:18:33.818 07:27:55 -- common/autotest_common.sh@940 -- # kill -0 84561 00:18:33.818 07:27:55 -- common/autotest_common.sh@941 -- # uname 00:18:33.818 07:27:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:33.818 07:27:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84561 00:18:33.818 07:27:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:33.818 killing process with pid 84561 00:18:33.818 07:27:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:33.818 07:27:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84561' 00:18:33.818 
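The trace above is the point where host/digest.sh validates the run: it pulls the per-bdev I/O statistics from the bdevperf instance over /var/tmp/bperf.sock and extracts the command_transient_transport_error counter with jq, then requires it to be non-zero before killing the process. A minimal stand-alone sketch of that check, using the same RPC call and jq path that appear in the trace (the helper name is reused from the script only for readability):

  # query the transient-error counter the same way the test does
  get_transient_errcount() {
      local bdev=$1
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }
  errcount=$(get_transient_errcount nvme0n1)
  # in this run the digest corruption produced 142 such errors, so the check passes
  (( errcount > 0 )) || echo 'FAIL: no transient transport errors recorded' >&2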
Received shutdown signal, test time was about 2.000000 seconds 00:18:33.818 00:18:33.818 Latency(us) 00:18:33.818 [2024-11-28T07:27:56.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.818 [2024-11-28T07:27:56.093Z] =================================================================================================================== 00:18:33.818 [2024-11-28T07:27:56.093Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.818 07:27:55 -- common/autotest_common.sh@955 -- # kill 84561 00:18:33.818 07:27:55 -- common/autotest_common.sh@960 -- # wait 84561 00:18:34.077 07:27:56 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:18:34.077 07:27:56 -- host/digest.sh@54 -- # local rw bs qd 00:18:34.077 07:27:56 -- host/digest.sh@56 -- # rw=randwrite 00:18:34.077 07:27:56 -- host/digest.sh@56 -- # bs=131072 00:18:34.077 07:27:56 -- host/digest.sh@56 -- # qd=16 00:18:34.077 07:27:56 -- host/digest.sh@58 -- # bperfpid=84621 00:18:34.077 07:27:56 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:34.077 07:27:56 -- host/digest.sh@60 -- # waitforlisten 84621 /var/tmp/bperf.sock 00:18:34.077 07:27:56 -- common/autotest_common.sh@829 -- # '[' -z 84621 ']' 00:18:34.077 07:27:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:34.077 07:27:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:34.077 07:27:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:34.077 07:27:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.077 07:27:56 -- common/autotest_common.sh@10 -- # set +x 00:18:34.077 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:34.077 Zero copy mechanism will not be used. 00:18:34.077 [2024-11-28 07:27:56.257263] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
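The next error case (randwrite, 131072-byte I/Os, queue depth 16) is started the same way: digest.sh launches a fresh bdevperf and waits for its RPC socket before configuring it. A rough stand-alone equivalent of that launch, with the command line and socket path copied from the trace; the polling loop is only a stand-in for the test's waitforlisten helper, not its actual implementation:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # -z makes bdevperf wait for an RPC before starting I/O,
  # so it is enough here to wait for the UNIX socket to appear
  until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done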
00:18:34.077 [2024-11-28 07:27:56.257392] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84621 ] 00:18:34.336 [2024-11-28 07:27:56.387562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.336 [2024-11-28 07:27:56.480024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.273 07:27:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.273 07:27:57 -- common/autotest_common.sh@862 -- # return 0 00:18:35.273 07:27:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:35.273 07:27:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:35.273 07:27:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:35.274 07:27:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.274 07:27:57 -- common/autotest_common.sh@10 -- # set +x 00:18:35.274 07:27:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.274 07:27:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:35.274 07:27:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:35.843 nvme0n1 00:18:35.843 07:27:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:35.843 07:27:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.843 07:27:57 -- common/autotest_common.sh@10 -- # set +x 00:18:35.843 07:27:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.843 07:27:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:35.843 07:27:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:35.843 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:35.843 Zero copy mechanism will not be used. 00:18:35.843 Running I/O for 2 seconds... 
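With bdevperf up, the trace shows the full error-injection setup for this pass: NVMe error statistics and unlimited retries are enabled on the initiator, crc32c error injection is disabled, the TCP controller is attached with data digest (--ddgst) enabled, injection is then switched to corrupt every 32nd crc32c operation, and perform_tests kicks off the 2-second run. The sequence can be replayed by hand roughly as below; all RPC names and flags are taken from the trace, but the socket that rpc_cmd uses for accel_error_inject_error is not visible in this log, so ACCEL_SOCK is a placeholder:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock
  ACCEL_SOCK=/var/tmp/spdk.sock   # placeholder; the real socket is not shown in this trace

  # initiator side: keep NVMe error statistics and retry failed I/O without limit (-1)
  $RPC -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # start clean: no crc32c error injection while the controller is attached
  $RPC -s $ACCEL_SOCK accel_error_inject_error -o crc32c -t disable
  # attach the TCP controller with data digest enabled
  $RPC -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # now corrupt every 32nd crc32c operation and start the workload
  $RPC -s $ACCEL_SOCK accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests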
00:18:35.843 [2024-11-28 07:27:58.018461] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.843 [2024-11-28 07:27:58.018772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.843 [2024-11-28 07:27:58.018811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.843 [2024-11-28 07:27:58.023249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.843 [2024-11-28 07:27:58.023546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.843 [2024-11-28 07:27:58.023576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.843 [2024-11-28 07:27:58.027999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.843 [2024-11-28 07:27:58.028309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.843 [2024-11-28 07:27:58.028351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.843 [2024-11-28 07:27:58.032818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.843 [2024-11-28 07:27:58.033071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.843 [2024-11-28 07:27:58.033100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.843 [2024-11-28 07:27:58.037452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.843 [2024-11-28 07:27:58.037713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.843 [2024-11-28 07:27:58.037741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.843 [2024-11-28 07:27:58.042107] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.843 [2024-11-28 07:27:58.042390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.843 [2024-11-28 07:27:58.042418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.843 [2024-11-28 07:27:58.046753] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.047005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.047032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.051405] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.051661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.051688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.055964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.056271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.056300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.060738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.061001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.061032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.065479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.065740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.065769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.070077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.070376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.070404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.074979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.075234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.075261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.079631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.079889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.079916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.084316] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.084608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.084636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.089003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.089257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.089278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.093731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.094006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.094028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.098374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.098633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.098671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.103144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.103475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.103505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.107866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.108207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.108239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:35.844 [2024-11-28 07:27:58.112736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:35.844 [2024-11-28 07:27:58.113076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.844 [2024-11-28 07:27:58.113106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.117946] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.118270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.118301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.123187] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.123510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.123545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.127884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.128222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.128255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.132754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.133065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.133090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.137528] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.137842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.137866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.142232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.142557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.142588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.147010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.147323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 
[2024-11-28 07:27:58.147346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.151750] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.152062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.152132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.156663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.156976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.157006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.161533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.161848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.161879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.166345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.166660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.166690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.171147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.171475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.171506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.175935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.176267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.176299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.180831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.181144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.181174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.185598] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.185911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.185940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.190455] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.190768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.190798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.195179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.195507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.195537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.199917] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.200257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.200288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.204819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.205130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.205156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.209603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.209918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.209950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.214489] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.214804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.214835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.219183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.219506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.219545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.224041] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.224381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.224412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.228987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.229306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.229345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.234167] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.106 [2024-11-28 07:27:58.234504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.106 [2024-11-28 07:27:58.234534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.106 [2024-11-28 07:27:58.239168] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.239493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.239523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.243889] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.244223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.244257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.248738] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.249040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.249063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.253576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.253888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.253917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.258335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.258651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.258681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.263062] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.263411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.263440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.267866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.268218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.268249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.272835] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.273152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.273176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.277707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.278020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.278050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.282544] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 
[2024-11-28 07:27:58.282861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.282891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.287407] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.287717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.287747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.292186] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.292504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.292550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.297119] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.297452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.297482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.302129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.302421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.302466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.307030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.307359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.307409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.311902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.312243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.312273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.316758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.317071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.317102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.321614] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.321925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.321963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.326401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.326715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.326745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.331130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.331452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.331482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.335945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.336294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.336319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.340861] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.341176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.341206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.345663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.345975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.346005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.350438] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.350755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.350784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.355162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.355508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.355537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.359964] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.360319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.360359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.364896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.107 [2024-11-28 07:27:58.365219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.107 [2024-11-28 07:27:58.365249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.107 [2024-11-28 07:27:58.369755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.108 [2024-11-28 07:27:58.370060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.108 [2024-11-28 07:27:58.370090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.108 [2024-11-28 07:27:58.374788] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.108 [2024-11-28 07:27:58.375063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.108 [2024-11-28 07:27:58.375092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.369 [2024-11-28 07:27:58.379885] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.369 [2024-11-28 07:27:58.380257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.380319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:36.370 [2024-11-28 07:27:58.384902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.385197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.385221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.389644] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.389948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.389978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.394368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.394676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.394708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.399047] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.399349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.399389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.403696] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.404002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.404031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.408396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.408726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.408754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.413067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.413373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.413411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.417855] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.418148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.418174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.422562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.422865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.422895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.427210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.427526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.427555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.431807] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.432146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.432176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.436551] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.436856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.436885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.441309] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.441615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.441644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.445993] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.446297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.446334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.450602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.450907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.450936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.455249] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.455577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.455607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.459902] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.460239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.460269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.464650] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.464956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.464986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.469304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.469621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.469649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.473991] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.370 [2024-11-28 07:27:58.474294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.370 [2024-11-28 07:27:58.474331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.370 [2024-11-28 07:27:58.478662] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.478966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.478995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.483331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.483638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.483667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.488008] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.488356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.488385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.493068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.493361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.493405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.497955] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.498260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.498289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.502617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.502922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.502951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.507288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.507607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.507635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.511916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.512252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 
[2024-11-28 07:27:58.512282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.516609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.516912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.516942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.521247] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.521600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.521630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.525962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.526269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.526298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.530571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.530875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.530905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.535210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.535526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.535555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.539797] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.540129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.540159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.544498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.544819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.544855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.549198] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.549516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.549545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.553818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.554123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.554152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.558472] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.558780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.558809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.563115] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.563437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.563464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.567794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.568117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.568146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.572538] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.371 [2024-11-28 07:27:58.572844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.371 [2024-11-28 07:27:58.572872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.371 [2024-11-28 07:27:58.577214] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.577531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.577561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.581798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.582103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.582132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.586365] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.586675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.586708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.591015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.591317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.591363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.595590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.595896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.595925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.600248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.600588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.600617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.604924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.605230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.605261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.609632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.609937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.609966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.614210] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.614524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.614553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.618848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.619152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.619182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.623498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.623802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.623832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.628049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.628428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.628472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.632824] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.633130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.633159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.372 [2024-11-28 07:27:58.637625] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.372 [2024-11-28 07:27:58.637931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.372 [2024-11-28 07:27:58.637961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.642597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 
[2024-11-28 07:27:58.642926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.642955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.647389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.647696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.647725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.652375] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.652716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.652745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.657053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.657359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.657397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.661744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.662050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.662078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.666471] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.666775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.666803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.671087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.671407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.671446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.675731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.676033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.676066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.680452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.680772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.680801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.685117] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.685425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.685454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.689907] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.634 [2024-11-28 07:27:58.690223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.634 [2024-11-28 07:27:58.690253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.634 [2024-11-28 07:27:58.694776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.695081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.695136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.699576] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.699882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.699911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.704513] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.704836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.704864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.709396] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.709716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.709745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.714092] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.714415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.714446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.718741] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.719044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.719072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.723439] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.723742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.723773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.728177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.728517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.728546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.732884] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.733188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.733216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.737511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.737817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.737847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
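Each spdk_nvme_print_completion line in this log decodes the completion queue entry's status field: "(00/22)" is status code type 0x0 (generic command status) with status code 0x22, which the driver prints as COMMAND TRANSIENT TRANSPORT ERROR, while sqhd, p, m and dnr are the submission queue head pointer, phase tag, more bit and do-not-retry bit. A minimal sketch of that unpacking, assuming a raw 16-bit status word and local struct/field names rather than SPDK's own types, follows.

#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of the completion status word as printed above:
 * phase tag in bit 0, then SC (8 bits), SCT (3 bits), CRD (2 bits), M, DNR.
 * Field layout follows the NVMe base specification; names are local to this sketch. */
struct cpl_status {
    unsigned p   : 1;  /* phase tag */
    unsigned sc  : 8;  /* status code */
    unsigned sct : 3;  /* status code type */
    unsigned crd : 2;  /* command retry delay */
    unsigned m   : 1;  /* more */
    unsigned dnr : 1;  /* do not retry */
};

static struct cpl_status decode(uint16_t raw)
{
    struct cpl_status s;
    s.p   =  raw        & 0x1;
    s.sc  = (raw >> 1)  & 0xFF;
    s.sct = (raw >> 9)  & 0x7;
    s.crd = (raw >> 12) & 0x3;
    s.m   = (raw >> 14) & 0x1;
    s.dnr = (raw >> 15) & 0x1;
    return s;
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 is the "(00/22)" pair the log prints; p, m and dnr
     * are all zero in these completions, matching the entries above. */
    uint16_t raw = (uint16_t)(0x22 << 1);
    struct cpl_status s = decode(raw);
    printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n",
           (unsigned)s.sct, (unsigned)s.sc, (unsigned)s.p, (unsigned)s.m, (unsigned)s.dnr);
    return 0;
}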
00:18:36.635 [2024-11-28 07:27:58.742209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.742517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.742545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.746839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.747145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.747174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.751801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.752154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.752183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.756936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.757241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.757270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.761556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.761865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.761894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.766136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.766454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.766483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.770732] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.771040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.771076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.775368] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.775672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.775701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.779969] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.780304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.780350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.784692] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.784996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.785025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.789503] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.789810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.635 [2024-11-28 07:27:58.789838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.635 [2024-11-28 07:27:58.794205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.635 [2024-11-28 07:27:58.794512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.794541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.798910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.799218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.799248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.803516] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.803820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.803849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.808272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.808627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.808657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.812924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.813226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.813256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.817582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.817887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.817916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.822230] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.822563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.822593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.826816] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.827120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.827150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.831428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.831734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.831763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.835986] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.836333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.836372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.840718] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.841021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.841050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.845373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.845678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.845716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.849972] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.850276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.850316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.854561] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.854863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.854892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.859199] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.859524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.859554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.863799] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.864146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.864192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.868548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.868854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 
[2024-11-28 07:27:58.868883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.873238] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.873556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.873585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.877845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.878148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.878178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.882418] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.882723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.882751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.886984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.887289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.887327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.891811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.636 [2024-11-28 07:27:58.892206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.636 [2024-11-28 07:27:58.892243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.636 [2024-11-28 07:27:58.896701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.637 [2024-11-28 07:27:58.897007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.637 [2024-11-28 07:27:58.897037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.637 [2024-11-28 07:27:58.901403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.637 [2024-11-28 07:27:58.901710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.637 [2024-11-28 07:27:58.901739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.637 [2024-11-28 07:27:58.906330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.637 [2024-11-28 07:27:58.906666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.637 [2024-11-28 07:27:58.906695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.911051] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.911367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.911395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.915981] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.916326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.916375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.920800] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.921103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.921133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.925451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.925756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.925785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.930102] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.930419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.930448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.934756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.935059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.935088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.939450] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.939754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.939783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.944161] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.944498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.944542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.948849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.949156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.949185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.953519] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.953822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.953851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.958121] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.958436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.958463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.962695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.963002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.963030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.967284] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.967599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.967628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.971848] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.972210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.972240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.976654] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.976958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.976987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.981483] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.981791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.981821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.986192] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.986508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.986537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.990910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.991232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.991261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:58.995802] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.898 [2024-11-28 07:27:58.996150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.898 [2024-11-28 07:27:58.996181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.898 [2024-11-28 07:27:59.000702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 
[2024-11-28 07:27:59.001004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.001035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.005288] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.005619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.005648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.010371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.010700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.010730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.015270] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.015586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.015615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.019927] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.020267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.020291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.024818] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.025124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.025153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.029599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.029903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.029932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.034240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.034569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.034592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.039082] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.039428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.039456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.043965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.044295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.044347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.048879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.049181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.049210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.053713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.054027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.054056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.058480] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.058797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.058827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.063169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.063497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.063526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.067862] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.068220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.068252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.072631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.072956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.072987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.077390] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.077677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.077719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.082065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.082381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.082405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.086678] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.086983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.087012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.091330] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.091634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.091662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.095979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.096314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.096355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:36.899 [2024-11-28 07:27:59.100757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.101063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.101096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.105493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.105798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.105829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.110177] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.110491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.110520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.114796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.115102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.115131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.119435] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.119739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.119767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.124172] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.899 [2024-11-28 07:27:59.124508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.899 [2024-11-28 07:27:59.124551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.899 [2024-11-28 07:27:59.128817] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.129123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.129161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.133543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.133845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.133874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.138235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.138496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.138534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.142744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.142998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.143034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.147159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.147443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.147470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.151708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.151973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.151994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.156128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.156405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.156473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.160747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.160996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.161035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.165183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.165464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.165503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:36.900 [2024-11-28 07:27:59.170194] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:36.900 [2024-11-28 07:27:59.170506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:36.900 [2024-11-28 07:27:59.170560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.175071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.175336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.175383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.179865] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.180169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.180199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.184411] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.184713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.184739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.188938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.189191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.189222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.193479] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.193746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.193781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.198074] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.198360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.198412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.202705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.202954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.202989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.207326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.207606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.207643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.211756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.211995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.212031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.216201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.216479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.216522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.220754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.220993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.221025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.225132] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.225382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 
[2024-11-28 07:27:59.225409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.229374] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.229628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.229656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.233754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.233991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.234013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.238137] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.238403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.238424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.242491] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.242746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.242768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.246999] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.247237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.247257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.251457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.251725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.251757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.255758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.255996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.256017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.260063] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.260356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.260377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.264277] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.264576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.264598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.269057] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.269334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.269406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.273914] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.274162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.274183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.278250] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.278496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.278517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.282584] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.282851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.161 [2024-11-28 07:27:59.282882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.161 [2024-11-28 07:27:59.286998] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.161 [2024-11-28 07:27:59.287258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.287279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.291452] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.291706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.291735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.296312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.296622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.296650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.300769] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.301006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.301033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.305103] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.305380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.305412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.309387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.309640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.309673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.313781] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.314020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.314069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.318206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.318473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.318524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.322693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.322942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.323008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.327260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.327559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.327590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.331742] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.331980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.332001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.336150] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.336423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.336446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.340590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.340866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.340894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.344967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.345203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.345224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.349293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 
[2024-11-28 07:27:59.349542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.349563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.353599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.353836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.353875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.358071] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.358362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.358405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.362648] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.362929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.362962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.367162] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.367439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.367471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.371520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.371805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.371841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.375945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.376224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.376257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.380436] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.380707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.380751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.384915] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.385167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.385194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.389396] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.389661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.389698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.393831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.394091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.394112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.398358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.398608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.398672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.402798] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.162 [2024-11-28 07:27:59.403045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.162 [2024-11-28 07:27:59.403079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.162 [2024-11-28 07:27:59.407195] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.163 [2024-11-28 07:27:59.407473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.163 [2024-11-28 07:27:59.407508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.163 [2024-11-28 07:27:59.411647] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.163 [2024-11-28 07:27:59.411912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.163 [2024-11-28 07:27:59.411947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.163 [2024-11-28 07:27:59.416128] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.163 [2024-11-28 07:27:59.416402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.163 [2024-11-28 07:27:59.416462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.163 [2024-11-28 07:27:59.420630] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.163 [2024-11-28 07:27:59.420879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.163 [2024-11-28 07:27:59.420918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.163 [2024-11-28 07:27:59.425029] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.163 [2024-11-28 07:27:59.425276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.163 [2024-11-28 07:27:59.425330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.163 [2024-11-28 07:27:59.429565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.163 [2024-11-28 07:27:59.429851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.163 [2024-11-28 07:27:59.429903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.434581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.434880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.434910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.439201] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.439531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.439561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
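The repeated data_crc32_calc_done "Data digest error" messages above come from CRC32C data-digest verification on the NVMe/TCP connection: when the digest computed over a received data PDU does not match the digest carried with the PDU, the transfer is rejected and the corresponding WRITE completes with the generic TRANSIENT TRANSPORT ERROR (00/22) status shown in these completions. A minimal, generic CRC32C sketch follows (illustrative only; it assumes nothing about SPDK's own implementation and simply shows the kind of check such a digest mismatch implies):

    # Generic CRC32C (Castagnoli) digest check -- illustrative sketch, not SPDK code.
    def _make_crc32c_table():
        poly = 0x82F63B78  # reflected Castagnoli polynomial
        table = []
        for byte in range(256):
            crc = byte
            for _ in range(8):
                crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
            table.append(crc)
        return table

    _TABLE = _make_crc32c_table()

    def crc32c(data: bytes) -> int:
        # Standard byte-wise reflected CRC-32C with init/final XOR of 0xFFFFFFFF.
        crc = 0xFFFFFFFF
        for b in data:
            crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
        return crc ^ 0xFFFFFFFF

    def data_digest_ok(payload: bytes, received_digest: int) -> bool:
        # A receiver recomputes the digest over the payload and compares it with
        # the digest carried alongside it; a mismatch is a "data digest error".
        return crc32c(payload) == received_digest

    if __name__ == "__main__":
        payload = b"\x00" * 512
        good = crc32c(payload)
        assert data_digest_ok(payload, good)
        assert not data_digest_ok(payload, good ^ 0x1)  # corrupted digest is detected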
00:18:37.433 [2024-11-28 07:27:59.444006] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.444317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.444366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.448590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.448839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.448873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.453083] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.453331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.453363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.457499] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.457749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.457779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.461977] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.462228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.462273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.466558] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.466818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.466846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.471124] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.471400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.471432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.475676] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.475940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.475961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.480445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.480714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.480738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.485053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.485320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.485373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.489805] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.490104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.490143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.494695] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.494947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.494979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.499403] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.499711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.499755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.504183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.504540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.504584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.509148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.509449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.509521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.513962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.514271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.514351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.518949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.519219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.519284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.433 [2024-11-28 07:27:59.523936] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.433 [2024-11-28 07:27:59.524254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.433 [2024-11-28 07:27:59.524286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.528776] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.529088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.529138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.533438] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.533688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.533712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.538203] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.538474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.538505] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.542705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.542955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.542985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.547294] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.547580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.547618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.551962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.552292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.552335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.556770] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.557020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.557047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.561789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.562106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.562140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.566984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.567316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.567357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.572453] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.572796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 
[2024-11-28 07:27:59.572873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.578216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.578535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.578588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.583501] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.583799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.583830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.588734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.589028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.589082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.593925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.594248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.594283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.599260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.599609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.599659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.604736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.605022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.605072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.610141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.610485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.610530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.615622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.615958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.615991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.620378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.620692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.620739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.625104] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.625360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.625396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.629735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.629987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.630010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.634431] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.634679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.634705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.638945] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.639194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.639228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.643926] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.644252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.644297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.648658] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.648910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.648943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.653235] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.653509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.434 [2024-11-28 07:27:59.653553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.434 [2024-11-28 07:27:59.657778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.434 [2024-11-28 07:27:59.658028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.658061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.435 [2024-11-28 07:27:59.662290] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.435 [2024-11-28 07:27:59.662552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.662579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.435 [2024-11-28 07:27:59.666947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.435 [2024-11-28 07:27:59.667217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.667277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.435 [2024-11-28 07:27:59.671724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.435 [2024-11-28 07:27:59.672021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.672045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.435 [2024-11-28 07:27:59.676862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.435 [2024-11-28 07:27:59.677202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.677234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.435 [2024-11-28 07:27:59.682169] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.435 [2024-11-28 07:27:59.682492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.682524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.435 [2024-11-28 07:27:59.687413] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.435 [2024-11-28 07:27:59.687692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.687720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.435 [2024-11-28 07:27:59.692565] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.435 [2024-11-28 07:27:59.692909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.692956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.435 [2024-11-28 07:27:59.697757] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.435 [2024-11-28 07:27:59.698027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.435 [2024-11-28 07:27:59.698087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.702866] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.703142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.703180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.707944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.708314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.708355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.713171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 
[2024-11-28 07:27:59.713504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.713561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.718043] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.718326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.718373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.722943] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.723239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.723299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.728014] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.728381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.728429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.732791] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.733056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.733088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.737720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.737976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.738013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.742587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.742864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.742932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.747237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.747497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.747550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.751759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.752007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.752050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.756626] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.756982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.757018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.761236] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.761500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.761552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.766136] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.766391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.766428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.771080] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.771408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.771452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.775806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.776054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.776144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.780792] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.781042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.781070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.785535] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.785776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.785803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.790138] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.790412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.790478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.794747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.710 [2024-11-28 07:27:59.794984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.710 [2024-11-28 07:27:59.795011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.710 [2024-11-28 07:27:59.799131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.799402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.799429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.803938] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.804238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.804267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.808609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.808865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.808937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:37.711 [2024-11-28 07:27:59.813293] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.813618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.813653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.818069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.818361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.818415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.822801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.823053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.823075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.827469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.827709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.827731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.831949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.832244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.832274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.836534] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.836809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.836852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.841222] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.841500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.841544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.846382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.846705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.846750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.851789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.852188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.852239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.856670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.856924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.856951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.861208] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.861485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.861529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.865744] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.865981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.866001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.870175] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.870423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.870443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.874507] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.874765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.874792] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.878863] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.879102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.879124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.883159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.883406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.883426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.887518] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.887754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.887776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.891743] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.891979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.892001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.896015] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.896312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.896350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.900329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.900626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.900653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.904754] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.904992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.905013] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.909076] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.909312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.909332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.913419] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.913661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.913682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.917764] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.711 [2024-11-28 07:27:59.918007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.711 [2024-11-28 07:27:59.918029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.711 [2024-11-28 07:27:59.922079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.922330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.922367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.926602] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.926854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.926902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.930869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.931106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.931127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.935171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.935421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:37.712 [2024-11-28 07:27:59.935456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.939487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.939725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.939746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.943713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.943950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.943972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.948070] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.948383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.948411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.952367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.952635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.952663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.956631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.956884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.956932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.960887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.961124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.961145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.965141] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.965393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.965418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.969415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.969654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.969676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.973657] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.973894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.973916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.977966] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.978206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.978228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:37.712 [2024-11-28 07:27:59.982838] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.712 [2024-11-28 07:27:59.983157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.712 [2024-11-28 07:27:59.983194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:37.972 [2024-11-28 07:27:59.987434] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.972 [2024-11-28 07:27:59.987674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.972 [2024-11-28 07:27:59.987696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:37.972 [2024-11-28 07:27:59.992133] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.972 [2024-11-28 07:27:59.992454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:37.972 [2024-11-28 07:27:59.992486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.972 [2024-11-28 07:27:59.996560] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23dad30) with pdu=0x2000190fef90 00:18:37.972 [2024-11-28 07:27:59.996818] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
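These digest errors are injected deliberately by the digest_error test, so the repetition above is expected output rather than a failure in itself. As a rough cross-check outside the harness, the repetitions can be tallied from a saved copy of this console output and compared against the command_transient_transport_error figure the test reads back over RPC below; a minimal sketch, assuming the console log was captured to build.log (that filename is an assumption, not something the test produces):

  # Count injected data-digest errors and the matching transient-transport-error
  # completions in a saved console log (build.log is a placeholder path).
  grep -o 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log | wc -l
  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log | wc -l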
00:18:37.972
00:18:37.972 Latency(us)
00:18:37.972 [2024-11-28T07:28:00.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:37.972 [2024-11-28T07:28:00.247Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:18:37.972 nvme0n1 : 2.00 6590.67 823.83 0.00 0.00 2422.81 1534.14 5928.03
00:18:37.972 [2024-11-28T07:28:00.247Z] ===================================================================================================================
00:18:37.972 [2024-11-28T07:28:00.247Z] Total : 6590.67 823.83 0.00 0.00 2422.81 1534.14 5928.03
00:18:37.972 0
00:18:37.972 07:28:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:37.972 07:28:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:37.972 | .driver_specific
00:18:37.972 | .nvme_error
00:18:37.972 | .status_code
00:18:37.972 | .command_transient_transport_error'
00:18:37.972 07:28:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:37.972 07:28:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:38.232 07:28:00 -- host/digest.sh@71 -- # (( 425 > 0 ))
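The trace above shows how the test derives its pass/fail figure: it issues the bdev_get_iostat RPC against the bperf socket, extracts the transient-transport-error counter from the returned JSON with jq, and asserts that the count is greater than zero. A minimal standalone sketch of the same query, assuming an SPDK application is still listening on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 (both taken from the trace; the variable names are illustrative only):

  #!/usr/bin/env bash
  # Query the per-bdev NVMe error counters over SPDK's JSON-RPC socket and pull out
  # the transient transport error count, mirroring the check in the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  bdev=nvme0n1
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest_error test treats a non-zero count as success, since the errors are injected on purpose.
  if (( errcount > 0 )); then
      echo "saw $errcount transient transport errors"
  else
      echo "no transient transport errors recorded"
  fi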
killing process with pid 84621 00:18:38.232 07:28:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:38.232 07:28:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84621' 00:18:38.232 Received shutdown signal, test time was about 2.000000 seconds 00:18:38.232 00:18:38.232 Latency(us) 00:18:38.232 [2024-11-28T07:28:00.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.232 [2024-11-28T07:28:00.507Z] =================================================================================================================== 00:18:38.232 [2024-11-28T07:28:00.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.232 07:28:00 -- common/autotest_common.sh@955 -- # kill 84621 00:18:38.232 07:28:00 -- common/autotest_common.sh@960 -- # wait 84621 00:18:38.491 07:28:00 -- host/digest.sh@115 -- # killprocess 84408 00:18:38.491 07:28:00 -- common/autotest_common.sh@936 -- # '[' -z 84408 ']' 00:18:38.491 07:28:00 -- common/autotest_common.sh@940 -- # kill -0 84408 00:18:38.491 07:28:00 -- common/autotest_common.sh@941 -- # uname 00:18:38.491 07:28:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:38.491 07:28:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84408 00:18:38.491 07:28:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:38.491 07:28:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:38.491 killing process with pid 84408 00:18:38.491 07:28:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84408' 00:18:38.491 07:28:00 -- common/autotest_common.sh@955 -- # kill 84408 00:18:38.491 07:28:00 -- common/autotest_common.sh@960 -- # wait 84408 00:18:38.750 00:18:38.750 real 0m19.035s 00:18:38.750 user 0m36.770s 00:18:38.750 sys 0m5.133s 00:18:38.750 07:28:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:38.750 ************************************ 00:18:38.750 END TEST nvmf_digest_error 00:18:38.750 ************************************ 00:18:38.750 07:28:00 -- common/autotest_common.sh@10 -- # set +x 00:18:38.750 07:28:00 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:18:38.750 07:28:00 -- host/digest.sh@139 -- # nvmftestfini 00:18:38.750 07:28:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:38.750 07:28:00 -- nvmf/common.sh@116 -- # sync 00:18:38.750 07:28:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:38.750 07:28:00 -- nvmf/common.sh@119 -- # set +e 00:18:38.750 07:28:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:38.750 07:28:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:38.750 rmmod nvme_tcp 00:18:38.750 rmmod nvme_fabrics 00:18:38.750 rmmod nvme_keyring 00:18:38.750 07:28:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:38.750 07:28:00 -- nvmf/common.sh@123 -- # set -e 00:18:38.750 07:28:01 -- nvmf/common.sh@124 -- # return 0 00:18:38.750 07:28:01 -- nvmf/common.sh@477 -- # '[' -n 84408 ']' 00:18:38.750 07:28:01 -- nvmf/common.sh@478 -- # killprocess 84408 00:18:38.750 07:28:01 -- common/autotest_common.sh@936 -- # '[' -z 84408 ']' 00:18:38.750 07:28:01 -- common/autotest_common.sh@940 -- # kill -0 84408 00:18:38.750 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (84408) - No such process 00:18:38.750 Process with pid 84408 is not found 00:18:38.750 07:28:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 84408 is not found' 00:18:38.750 07:28:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:38.750 07:28:01 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:38.750 07:28:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:38.750 07:28:01 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:38.750 07:28:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:38.750 07:28:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.750 07:28:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.750 07:28:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.009 07:28:01 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:39.009 00:18:39.009 real 0m39.049s 00:18:39.009 user 1m13.923s 00:18:39.009 sys 0m10.436s 00:18:39.009 07:28:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:39.009 07:28:01 -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 ************************************ 00:18:39.009 END TEST nvmf_digest 00:18:39.009 ************************************ 00:18:39.009 07:28:01 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:18:39.009 07:28:01 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:18:39.009 07:28:01 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:39.009 07:28:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:39.009 07:28:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.009 07:28:01 -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 ************************************ 00:18:39.009 START TEST nvmf_multipath 00:18:39.009 ************************************ 00:18:39.009 07:28:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:39.009 * Looking for test storage... 00:18:39.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:39.009 07:28:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:39.009 07:28:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:39.009 07:28:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:39.009 07:28:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:39.009 07:28:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:39.009 07:28:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:39.009 07:28:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:39.009 07:28:01 -- scripts/common.sh@335 -- # IFS=.-: 00:18:39.009 07:28:01 -- scripts/common.sh@335 -- # read -ra ver1 00:18:39.009 07:28:01 -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.009 07:28:01 -- scripts/common.sh@336 -- # read -ra ver2 00:18:39.009 07:28:01 -- scripts/common.sh@337 -- # local 'op=<' 00:18:39.009 07:28:01 -- scripts/common.sh@339 -- # ver1_l=2 00:18:39.009 07:28:01 -- scripts/common.sh@340 -- # ver2_l=1 00:18:39.009 07:28:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:39.009 07:28:01 -- scripts/common.sh@343 -- # case "$op" in 00:18:39.009 07:28:01 -- scripts/common.sh@344 -- # : 1 00:18:39.009 07:28:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:39.009 07:28:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:39.009 07:28:01 -- scripts/common.sh@364 -- # decimal 1 00:18:39.009 07:28:01 -- scripts/common.sh@352 -- # local d=1 00:18:39.009 07:28:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.009 07:28:01 -- scripts/common.sh@354 -- # echo 1 00:18:39.009 07:28:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:39.009 07:28:01 -- scripts/common.sh@365 -- # decimal 2 00:18:39.009 07:28:01 -- scripts/common.sh@352 -- # local d=2 00:18:39.009 07:28:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.009 07:28:01 -- scripts/common.sh@354 -- # echo 2 00:18:39.009 07:28:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:39.009 07:28:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:39.009 07:28:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:39.009 07:28:01 -- scripts/common.sh@367 -- # return 0 00:18:39.009 07:28:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.009 07:28:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:39.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.009 --rc genhtml_branch_coverage=1 00:18:39.009 --rc genhtml_function_coverage=1 00:18:39.009 --rc genhtml_legend=1 00:18:39.009 --rc geninfo_all_blocks=1 00:18:39.009 --rc geninfo_unexecuted_blocks=1 00:18:39.009 00:18:39.009 ' 00:18:39.009 07:28:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:39.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.009 --rc genhtml_branch_coverage=1 00:18:39.009 --rc genhtml_function_coverage=1 00:18:39.009 --rc genhtml_legend=1 00:18:39.009 --rc geninfo_all_blocks=1 00:18:39.009 --rc geninfo_unexecuted_blocks=1 00:18:39.009 00:18:39.009 ' 00:18:39.009 07:28:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:39.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.009 --rc genhtml_branch_coverage=1 00:18:39.009 --rc genhtml_function_coverage=1 00:18:39.009 --rc genhtml_legend=1 00:18:39.009 --rc geninfo_all_blocks=1 00:18:39.009 --rc geninfo_unexecuted_blocks=1 00:18:39.009 00:18:39.009 ' 00:18:39.009 07:28:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:39.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.009 --rc genhtml_branch_coverage=1 00:18:39.009 --rc genhtml_function_coverage=1 00:18:39.009 --rc genhtml_legend=1 00:18:39.009 --rc geninfo_all_blocks=1 00:18:39.009 --rc geninfo_unexecuted_blocks=1 00:18:39.009 00:18:39.009 ' 00:18:39.009 07:28:01 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:39.009 07:28:01 -- nvmf/common.sh@7 -- # uname -s 00:18:39.009 07:28:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.269 07:28:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.269 07:28:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.269 07:28:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.269 07:28:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.269 07:28:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.269 07:28:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.269 07:28:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.269 07:28:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.269 07:28:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.269 07:28:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:18:39.269 
07:28:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:18:39.269 07:28:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.269 07:28:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.269 07:28:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:39.269 07:28:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:39.269 07:28:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.269 07:28:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.269 07:28:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.269 07:28:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.269 07:28:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.269 07:28:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.269 07:28:01 -- paths/export.sh@5 -- # export PATH 00:18:39.269 07:28:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.269 07:28:01 -- nvmf/common.sh@46 -- # : 0 00:18:39.269 07:28:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:39.269 07:28:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:39.269 07:28:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:39.269 07:28:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.269 07:28:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.269 07:28:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
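The NVME_HOSTNQN / NVME_HOSTID pair exported above is generated fresh for each run with 'nvme gen-hostnqn', and the NVME_HOST array simply packages them as flags for tests that drive the kernel initiator via 'nvme connect'. This multipath run uses bdevperf instead of the kernel initiator, so the following is only an illustrative sketch of how those variables would be consumed, with the target address and NQN taken from this run:

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<random>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # the UUID part doubles as the host ID

    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"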
00:18:39.269 07:28:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:39.269 07:28:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:39.269 07:28:01 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.269 07:28:01 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.269 07:28:01 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.269 07:28:01 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:39.269 07:28:01 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.269 07:28:01 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:39.269 07:28:01 -- host/multipath.sh@30 -- # nvmftestinit 00:18:39.269 07:28:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:39.269 07:28:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.269 07:28:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:39.269 07:28:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:39.269 07:28:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:39.269 07:28:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.269 07:28:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.269 07:28:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.269 07:28:01 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:39.269 07:28:01 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:39.269 07:28:01 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:39.269 07:28:01 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:39.269 07:28:01 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:39.269 07:28:01 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:39.269 07:28:01 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.269 07:28:01 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.269 07:28:01 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:39.269 07:28:01 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:39.269 07:28:01 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:39.269 07:28:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:39.269 07:28:01 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:39.269 07:28:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.269 07:28:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:39.269 07:28:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:39.269 07:28:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:39.269 07:28:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:39.269 07:28:01 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:39.269 07:28:01 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:39.269 Cannot find device "nvmf_tgt_br" 00:18:39.269 07:28:01 -- nvmf/common.sh@154 -- # true 00:18:39.269 07:28:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:39.269 Cannot find device "nvmf_tgt_br2" 00:18:39.269 07:28:01 -- nvmf/common.sh@155 -- # true 00:18:39.269 07:28:01 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:39.269 07:28:01 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:39.269 Cannot find device "nvmf_tgt_br" 00:18:39.269 07:28:01 -- nvmf/common.sh@157 -- # true 00:18:39.269 07:28:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:39.269 Cannot find device 
"nvmf_tgt_br2" 00:18:39.269 07:28:01 -- nvmf/common.sh@158 -- # true 00:18:39.269 07:28:01 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:39.269 07:28:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:39.269 07:28:01 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:39.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.269 07:28:01 -- nvmf/common.sh@161 -- # true 00:18:39.269 07:28:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:39.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.269 07:28:01 -- nvmf/common.sh@162 -- # true 00:18:39.269 07:28:01 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:39.269 07:28:01 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:39.269 07:28:01 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:39.269 07:28:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:39.269 07:28:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:39.269 07:28:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:39.269 07:28:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:39.269 07:28:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:39.269 07:28:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:39.269 07:28:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:39.269 07:28:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:39.270 07:28:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:39.528 07:28:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:39.528 07:28:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:39.528 07:28:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:39.528 07:28:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:39.528 07:28:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:39.528 07:28:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:39.528 07:28:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:39.528 07:28:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:39.528 07:28:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:39.528 07:28:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:39.528 07:28:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:39.528 07:28:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:39.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:18:39.529 00:18:39.529 --- 10.0.0.2 ping statistics --- 00:18:39.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.529 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:39.529 07:28:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:39.529 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:39.529 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:39.529 00:18:39.529 --- 10.0.0.3 ping statistics --- 00:18:39.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.529 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:39.529 07:28:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:39.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:39.529 00:18:39.529 --- 10.0.0.1 ping statistics --- 00:18:39.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.529 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:39.529 07:28:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.529 07:28:01 -- nvmf/common.sh@421 -- # return 0 00:18:39.529 07:28:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:39.529 07:28:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.529 07:28:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:39.529 07:28:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:39.529 07:28:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.529 07:28:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:39.529 07:28:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:39.529 07:28:01 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:39.529 07:28:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:39.529 07:28:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:39.529 07:28:01 -- common/autotest_common.sh@10 -- # set +x 00:18:39.529 07:28:01 -- nvmf/common.sh@469 -- # nvmfpid=84902 00:18:39.529 07:28:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:39.529 07:28:01 -- nvmf/common.sh@470 -- # waitforlisten 84902 00:18:39.529 07:28:01 -- common/autotest_common.sh@829 -- # '[' -z 84902 ']' 00:18:39.529 07:28:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.529 07:28:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.529 07:28:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.529 07:28:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.529 07:28:01 -- common/autotest_common.sh@10 -- # set +x 00:18:39.529 [2024-11-28 07:28:01.729393] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:39.529 [2024-11-28 07:28:01.729499] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.787 [2024-11-28 07:28:01.869584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:39.787 [2024-11-28 07:28:01.935026] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:39.787 [2024-11-28 07:28:01.935492] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.787 [2024-11-28 07:28:01.935611] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:39.787 [2024-11-28 07:28:01.935696] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.787 [2024-11-28 07:28:01.935963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.787 [2024-11-28 07:28:01.935971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.723 07:28:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.723 07:28:02 -- common/autotest_common.sh@862 -- # return 0 00:18:40.723 07:28:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:40.723 07:28:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.723 07:28:02 -- common/autotest_common.sh@10 -- # set +x 00:18:40.723 07:28:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.723 07:28:02 -- host/multipath.sh@33 -- # nvmfapp_pid=84902 00:18:40.723 07:28:02 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:40.983 [2024-11-28 07:28:03.116878] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.983 07:28:03 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:41.242 Malloc0 00:18:41.242 07:28:03 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:41.501 07:28:03 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:41.760 07:28:03 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.019 [2024-11-28 07:28:04.163425] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.019 07:28:04 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:42.279 [2024-11-28 07:28:04.455570] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:42.279 07:28:04 -- host/multipath.sh@44 -- # bdevperf_pid=84958 00:18:42.279 07:28:04 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:42.279 07:28:04 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.279 07:28:04 -- host/multipath.sh@47 -- # waitforlisten 84958 /var/tmp/bdevperf.sock 00:18:42.279 07:28:04 -- common/autotest_common.sh@829 -- # '[' -z 84958 ']' 00:18:42.279 07:28:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.279 07:28:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.279 07:28:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
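Condensed from the xtrace above and the lines that follow, the multipath bring-up is essentially the RPC sequence below; the commands are taken verbatim from this run, with only the long repo path factored into a variable. The second attach_controller call is the one that turns Nvme0 into a two-path (4420 + 4421) multipath bdev.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (nvmf_tgt, pid 84902): one malloc-backed subsystem, two TCP listeners
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Initiator side: bdevperf was started with -z, so it idles until driven over
    # its own RPC socket; the second attach adds the 4421 path in multipath mode.
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_set_options -r -1
    $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10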
00:18:42.279 07:28:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.279 07:28:04 -- common/autotest_common.sh@10 -- # set +x 00:18:43.215 07:28:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.215 07:28:05 -- common/autotest_common.sh@862 -- # return 0 00:18:43.215 07:28:05 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:43.474 07:28:05 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:44.042 Nvme0n1 00:18:44.042 07:28:06 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:44.042 Nvme0n1 00:18:44.301 07:28:06 -- host/multipath.sh@78 -- # sleep 1 00:18:44.301 07:28:06 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:45.239 07:28:07 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:45.239 07:28:07 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:45.499 07:28:07 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:45.758 07:28:07 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:45.758 07:28:07 -- host/multipath.sh@65 -- # dtrace_pid=85003 00:18:45.758 07:28:07 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84902 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:45.758 07:28:07 -- host/multipath.sh@66 -- # sleep 6 00:18:52.329 07:28:13 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:52.329 07:28:13 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:52.329 07:28:14 -- host/multipath.sh@67 -- # active_port=4421 00:18:52.329 07:28:14 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.329 Attaching 4 probes... 
00:18:52.329 @path[10.0.0.2, 4421]: 19756 00:18:52.329 @path[10.0.0.2, 4421]: 19659 00:18:52.329 @path[10.0.0.2, 4421]: 19359 00:18:52.329 @path[10.0.0.2, 4421]: 19643 00:18:52.329 @path[10.0.0.2, 4421]: 19394 00:18:52.329 07:28:14 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:52.329 07:28:14 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:52.329 07:28:14 -- host/multipath.sh@69 -- # sed -n 1p 00:18:52.329 07:28:14 -- host/multipath.sh@69 -- # port=4421 00:18:52.329 07:28:14 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.329 07:28:14 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:52.329 07:28:14 -- host/multipath.sh@72 -- # kill 85003 00:18:52.329 07:28:14 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:52.329 07:28:14 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:52.329 07:28:14 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:52.329 07:28:14 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:52.612 07:28:14 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:52.612 07:28:14 -- host/multipath.sh@65 -- # dtrace_pid=85122 00:18:52.612 07:28:14 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84902 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:52.612 07:28:14 -- host/multipath.sh@66 -- # sleep 6 00:18:59.200 07:28:20 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:59.200 07:28:20 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:59.200 07:28:21 -- host/multipath.sh@67 -- # active_port=4420 00:18:59.200 07:28:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.200 Attaching 4 probes... 
00:18:59.200 @path[10.0.0.2, 4420]: 18325 00:18:59.200 @path[10.0.0.2, 4420]: 19261 00:18:59.200 @path[10.0.0.2, 4420]: 19941 00:18:59.200 @path[10.0.0.2, 4420]: 20024 00:18:59.200 @path[10.0.0.2, 4420]: 20222 00:18:59.200 07:28:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:59.200 07:28:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:59.200 07:28:21 -- host/multipath.sh@69 -- # sed -n 1p 00:18:59.200 07:28:21 -- host/multipath.sh@69 -- # port=4420 00:18:59.200 07:28:21 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:59.200 07:28:21 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:59.200 07:28:21 -- host/multipath.sh@72 -- # kill 85122 00:18:59.200 07:28:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:59.200 07:28:21 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:59.200 07:28:21 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:59.200 07:28:21 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:59.460 07:28:21 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:59.460 07:28:21 -- host/multipath.sh@65 -- # dtrace_pid=85233 00:18:59.460 07:28:21 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84902 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:59.460 07:28:21 -- host/multipath.sh@66 -- # sleep 6 00:19:06.026 07:28:27 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:06.027 07:28:27 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:06.027 07:28:27 -- host/multipath.sh@67 -- # active_port=4421 00:19:06.027 07:28:27 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:06.027 Attaching 4 probes... 
00:19:06.027 @path[10.0.0.2, 4421]: 16207 00:19:06.027 @path[10.0.0.2, 4421]: 19768 00:19:06.027 @path[10.0.0.2, 4421]: 19477 00:19:06.027 @path[10.0.0.2, 4421]: 19939 00:19:06.027 @path[10.0.0.2, 4421]: 20301 00:19:06.027 07:28:27 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:06.027 07:28:27 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:06.027 07:28:27 -- host/multipath.sh@69 -- # sed -n 1p 00:19:06.027 07:28:27 -- host/multipath.sh@69 -- # port=4421 00:19:06.027 07:28:27 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:06.027 07:28:27 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:06.027 07:28:27 -- host/multipath.sh@72 -- # kill 85233 00:19:06.027 07:28:27 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:06.027 07:28:27 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:06.027 07:28:27 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:06.027 07:28:28 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:06.285 07:28:28 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:06.286 07:28:28 -- host/multipath.sh@65 -- # dtrace_pid=85347 00:19:06.286 07:28:28 -- host/multipath.sh@66 -- # sleep 6 00:19:06.286 07:28:28 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84902 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:12.879 07:28:34 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:12.879 07:28:34 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:12.879 07:28:34 -- host/multipath.sh@67 -- # active_port= 00:19:12.879 07:28:34 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:12.879 Attaching 4 probes... 
00:19:12.879 00:19:12.879 00:19:12.879 00:19:12.879 00:19:12.879 00:19:12.879 07:28:34 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:12.879 07:28:34 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:12.879 07:28:34 -- host/multipath.sh@69 -- # sed -n 1p 00:19:12.879 07:28:34 -- host/multipath.sh@69 -- # port= 00:19:12.879 07:28:34 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:12.879 07:28:34 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:12.879 07:28:34 -- host/multipath.sh@72 -- # kill 85347 00:19:12.879 07:28:34 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:12.879 07:28:34 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:12.879 07:28:34 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:12.879 07:28:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:13.137 07:28:35 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:13.137 07:28:35 -- host/multipath.sh@65 -- # dtrace_pid=85468 00:19:13.137 07:28:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84902 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:13.137 07:28:35 -- host/multipath.sh@66 -- # sleep 6 00:19:19.706 07:28:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:19.706 07:28:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:19.706 07:28:41 -- host/multipath.sh@67 -- # active_port=4421 00:19:19.706 07:28:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:19.706 Attaching 4 probes... 
00:19:19.706 @path[10.0.0.2, 4421]: 20066 00:19:19.706 @path[10.0.0.2, 4421]: 20265 00:19:19.706 @path[10.0.0.2, 4421]: 20370 00:19:19.706 @path[10.0.0.2, 4421]: 20621 00:19:19.706 @path[10.0.0.2, 4421]: 20530 00:19:19.706 07:28:41 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:19.706 07:28:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:19.706 07:28:41 -- host/multipath.sh@69 -- # sed -n 1p 00:19:19.706 07:28:41 -- host/multipath.sh@69 -- # port=4421 00:19:19.706 07:28:41 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:19.706 07:28:41 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:19.706 07:28:41 -- host/multipath.sh@72 -- # kill 85468 00:19:19.706 07:28:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:19.706 07:28:41 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:19.706 [2024-11-28 07:28:41.808885] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809250] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809276] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809285] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809295] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809303] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809326] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809377] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809410] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809419] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809459] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809499] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809506] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809534] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809551] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809559] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809566] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809589] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809613] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809621] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 [2024-11-28 07:28:41.809644] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55d80 is same with the state(5) to be set 00:19:19.706 07:28:41 -- host/multipath.sh@101 -- # sleep 1 00:19:20.642 07:28:42 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:20.642 07:28:42 -- host/multipath.sh@65 -- # dtrace_pid=85591 00:19:20.642 07:28:42 -- host/multipath.sh@66 -- # sleep 6 00:19:20.642 07:28:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84902 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:27.209 07:28:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:27.209 07:28:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:27.209 07:28:49 -- host/multipath.sh@67 -- # active_port=4420 00:19:27.209 07:28:49 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:27.209 Attaching 4 probes... 00:19:27.209 @path[10.0.0.2, 4420]: 20347 00:19:27.209 @path[10.0.0.2, 4420]: 20667 00:19:27.209 @path[10.0.0.2, 4420]: 20599 00:19:27.209 @path[10.0.0.2, 4420]: 21167 00:19:27.209 @path[10.0.0.2, 4420]: 19292 00:19:27.209 07:28:49 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:27.209 07:28:49 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:27.209 07:28:49 -- host/multipath.sh@69 -- # sed -n 1p 00:19:27.209 07:28:49 -- host/multipath.sh@69 -- # port=4420 00:19:27.209 07:28:49 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:27.209 07:28:49 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:27.209 07:28:49 -- host/multipath.sh@72 -- # kill 85591 00:19:27.209 07:28:49 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:27.209 07:28:49 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:27.209 [2024-11-28 07:28:49.397889] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:27.210 07:28:49 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:27.468 07:28:49 -- host/multipath.sh@111 -- # sleep 6 00:19:34.037 07:28:55 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:34.037 07:28:55 -- host/multipath.sh@65 -- # dtrace_pid=85771 00:19:34.037 07:28:55 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84902 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:34.037 07:28:55 -- host/multipath.sh@66 -- # sleep 6 00:19:40.618 07:29:01 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:40.618 07:29:01 -- host/multipath.sh@67 -- # jq -r '.[] | select 
(.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:40.618 07:29:01 -- host/multipath.sh@67 -- # active_port=4421 00:19:40.618 07:29:01 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:40.618 Attaching 4 probes... 00:19:40.618 @path[10.0.0.2, 4421]: 20782 00:19:40.618 @path[10.0.0.2, 4421]: 21123 00:19:40.618 @path[10.0.0.2, 4421]: 21224 00:19:40.618 @path[10.0.0.2, 4421]: 21233 00:19:40.618 @path[10.0.0.2, 4421]: 21268 00:19:40.618 07:29:01 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:40.618 07:29:01 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:40.618 07:29:01 -- host/multipath.sh@69 -- # sed -n 1p 00:19:40.618 07:29:01 -- host/multipath.sh@69 -- # port=4421 00:19:40.618 07:29:01 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:40.618 07:29:01 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:40.618 07:29:01 -- host/multipath.sh@72 -- # kill 85771 00:19:40.618 07:29:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:40.618 07:29:01 -- host/multipath.sh@114 -- # killprocess 84958 00:19:40.618 07:29:01 -- common/autotest_common.sh@936 -- # '[' -z 84958 ']' 00:19:40.618 07:29:01 -- common/autotest_common.sh@940 -- # kill -0 84958 00:19:40.618 07:29:01 -- common/autotest_common.sh@941 -- # uname 00:19:40.618 07:29:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.618 07:29:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84958 00:19:40.618 killing process with pid 84958 00:19:40.618 07:29:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:40.618 07:29:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:40.618 07:29:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84958' 00:19:40.618 07:29:02 -- common/autotest_common.sh@955 -- # kill 84958 00:19:40.618 07:29:02 -- common/autotest_common.sh@960 -- # wait 84958 00:19:40.618 Connection closed with partial response: 00:19:40.618 00:19:40.618 00:19:40.618 07:29:02 -- host/multipath.sh@116 -- # wait 84958 00:19:40.618 07:29:02 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:40.618 [2024-11-28 07:28:04.528421] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:40.618 [2024-11-28 07:28:04.528555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84958 ] 00:19:40.618 [2024-11-28 07:28:04.664751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.618 [2024-11-28 07:28:04.770950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.618 Running I/O for 90 seconds... 
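Each confirm_io_on_port round above follows the same recipe: flip the ANA state of the two listeners, attach scripts/bpf/nvmf_path.bt with bpftrace, let I/O run for six seconds, then recover the port actually carrying traffic from the "@path[10.0.0.2, PORT]: COUNT" probe lines and cross-check it against the listener the target reports in the expected ANA state. A compact sketch of that check, reusing the jq/awk/cut/sed pieces seen in the trace (the trace.txt path is shortened here):

    expected_state=optimized      # or non_optimized / inaccessible
    expected_port=4421

    # Which listener does the target report in the expected ANA state?
    listener_port=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

    # Which port did the bpftrace probes actually see I/O on?
    probed_port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

    [[ $probed_port == "$expected_port" && $listener_port == "$expected_port" ]] &&
        echo "I/O confirmed on port $expected_port"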
00:19:40.618 [2024-11-28 07:28:14.762883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.618 [2024-11-28 07:28:14.762982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:117152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.618 [2024-11-28 07:28:14.763296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:117208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.618 [2024-11-28 07:28:14.763389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:117224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:117232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.618 [2024-11-28 07:28:14.763584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.618 [2024-11-28 07:28:14.763622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.618 [2024-11-28 07:28:14.763658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763764] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:40.618 [2024-11-28 07:28:14.763822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.618 [2024-11-28 07:28:14.763836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.763857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.763871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.763915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.763932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.763953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.763967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.763989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:117280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:117288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.764076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.764126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117312 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.764477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.764602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.764639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.764710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764732] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.764782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.764855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.764984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.764999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.765043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.765079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 
07:28:14.765101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.765116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.765152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.765187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.765223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.765259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.619 [2024-11-28 07:28:14.765295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.765348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.765385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.765421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.765457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.619 [2024-11-28 07:28:14.765499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:40.619 [2024-11-28 07:28:14.765522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.765537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.765573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.765611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:117520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.765647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:117528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.765683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.765719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.765756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:117552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.765828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.765865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.765901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.765937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.765959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.765981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:117592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:117608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:40 nsid:1 lba:117656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:117664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:117672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:117712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.620 [2024-11-28 07:28:14.766885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 07:28:14.766950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.766964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:40.620 [2024-11-28 
07:28:14.766986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.620 [2024-11-28 07:28:14.767001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.767022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:117056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.767037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.767058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.767073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.767094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.767109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.768859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.768893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.768923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:117136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.768940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.768963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.768978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:117736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:117752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:117784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:117792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:117800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:14.769574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:14.769715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:14.769730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.304935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:21.305026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:21.305117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:21.305151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:40.621 [2024-11-28 07:28:21.305183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:21.305214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:21.305245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:21.305276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:21.305307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:21.305383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:21.305418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:21.305449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.621 [2024-11-28 07:28:21.305481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.621 [2024-11-28 07:28:21.305512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:40.621 [2024-11-28 07:28:21.305530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:76 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.305633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.305763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.305796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305952] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.305982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.305995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b 
p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.622 [2024-11-28 07:28:21.306667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:40.622 [2024-11-28 07:28:21.306686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.622 [2024-11-28 07:28:21.306714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.306753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.306787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.306818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.306849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.306880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.306913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.306944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.306975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.306994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.307037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.307136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.307199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.307230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:40.623 [2024-11-28 07:28:21.307294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.307423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.307487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.307550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.307637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 
lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.623 [2024-11-28 07:28:21.307969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.307993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.623 [2024-11-28 07:28:21.308007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:40.623 [2024-11-28 07:28:21.308025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.308038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.308154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.308215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.308329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:19:40.624 [2024-11-28 07:28:21.308349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.308363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.308613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.308625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.309485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.309531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.309570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.309609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.309648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.309701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.309744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.309783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.309822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.624 [2024-11-28 07:28:21.309861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.309900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:40.624 [2024-11-28 07:28:21.309925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.624 [2024-11-28 07:28:21.309938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.309964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:21.309977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:21.310015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:21.310054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:21.310093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:21.310131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:21.310176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:21.310238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:21.310277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:40.625 [2024-11-28 07:28:21.310317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:21.310376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:21.310402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:21.310416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.398707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:28.398780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.398853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:28.398875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.398898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:28.398914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.398946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:28.398961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.398983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.398998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:28.399034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.625 [2024-11-28 07:28:28.399071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:40.625 [2024-11-28 07:28:28.399474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.625 [2024-11-28 07:28:28.399489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.399524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.399644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.399774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:19:40.626 [2024-11-28 07:28:28.399913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.399926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.399980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.399994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.400028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.400173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.400630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.400666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.400775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.400884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.626 [2024-11-28 07:28:28.400956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.400999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.401020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.401042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.626 [2024-11-28 07:28:28.401057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:40.626 [2024-11-28 07:28:28.401087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:40.627 [2024-11-28 07:28:28.401113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.401221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.401257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 
nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.401732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.401769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.401804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.401876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.401947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.401969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.401983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.402005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.627 [2024-11-28 07:28:28.402020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.402041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.402063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.402085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.627 [2024-11-28 07:28:28.402100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:40.627 [2024-11-28 07:28:28.402128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.628 [2024-11-28 07:28:28.402178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.628 [2024-11-28 07:28:28.402249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:19:40.628 [2024-11-28 07:28:28.402271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.628 [2024-11-28 07:28:28.402610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.628 [2024-11-28 07:28:28.402680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.628 [2024-11-28 07:28:28.402882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.402975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.402989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.403018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.628 [2024-11-28 07:28:28.403033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.403054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.403069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.403090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.403116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.403137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.403152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.403173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.403188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.403210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.403224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.403245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.403259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.403280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.403296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.404278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.404346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14320 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.404392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.404437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.404496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.628 [2024-11-28 07:28:28.404541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.404586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.404631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:40.628 [2024-11-28 07:28:28.404668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.628 [2024-11-28 07:28:28.404684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:28.404714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.629 [2024-11-28 07:28:28.404729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:28.404758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.629 [2024-11-28 07:28:28.404773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:28.404803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.629 [2024-11-28 07:28:28.404818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:28.404848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:14400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.629 [2024-11-28 07:28:28.404863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:28.404892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:28.404907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:28.404937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.629 [2024-11-28 07:28:28.404952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.809737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.809779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.809803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.809844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.809860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.809871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.809884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.809895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.809908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.809919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.809931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.809942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.809955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.809966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.809978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:40 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.809989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:52936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:52960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.629 [2024-11-28 07:28:41.810251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53632 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:40.629 [2024-11-28 07:28:41.810264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:53640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.630 [2024-11-28 07:28:41.810290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:53648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.630 [2024-11-28 07:28:41.810315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:53656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.630 [2024-11-28 07:28:41.810368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:53664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.630 [2024-11-28 07:28:41.810394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.630 [2024-11-28 07:28:41.810420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.630 [2024-11-28 07:28:41.810474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:53008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.630 [2024-11-28 07:28:41.810513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:53024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.630 [2024-11-28 07:28:41.810536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.630 [2024-11-28 07:28:41.810548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 [2024-11-28 07:28:41.810566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 
[2024-11-28 07:28:41.810590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:53072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 [2024-11-28 07:28:41.810614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 [2024-11-28 07:28:41.810637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 [2024-11-28 07:28:41.810679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 [2024-11-28 07:28:41.810703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 [2024-11-28 07:28:41.810733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 [2024-11-28 07:28:41.810758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:53704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.631 [2024-11-28 07:28:41.810782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.631 [2024-11-28 07:28:41.810806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.631 [2024-11-28 07:28:41.810830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.631 [2024-11-28 07:28:41.810842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.631 [2024-11-28 07:28:41.810854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.632 [2024-11-28 07:28:41.810866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:53736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.632 [2024-11-28 07:28:41.810878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.632 [2024-11-28 07:28:41.810896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.632 [2024-11-28 07:28:41.810908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.632 [2024-11-28 07:28:41.810921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.632 [2024-11-28 07:28:41.810933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.632 [2024-11-28 07:28:41.810945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:53760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.632 [2024-11-28 07:28:41.810957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.632 [2024-11-28 07:28:41.810969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:53768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.632 [2024-11-28 07:28:41.810981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.632 [2024-11-28 07:28:41.810994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.632 [2024-11-28 07:28:41.811005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.632 [2024-11-28 07:28:41.811018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:53784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:53792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.633 [2024-11-28 07:28:41.811053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.633 [2024-11-28 07:28:41.811366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.633 [2024-11-28 07:28:41.811392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.633 [2024-11-28 07:28:41.811430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.633 [2024-11-28 07:28:41.811445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.634 [2024-11-28 07:28:41.811462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.634 [2024-11-28 07:28:41.811532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.634 [2024-11-28 07:28:41.811556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.634 [2024-11-28 07:28:41.811581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.634 [2024-11-28 07:28:41.811605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.634 [2024-11-28 07:28:41.811629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.634 [2024-11-28 07:28:41.811664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.634 [2024-11-28 07:28:41.811690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:53896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.634 [2024-11-28 07:28:41.811714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.634 [2024-11-28 07:28:41.811754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 
[2024-11-28 07:28:41.811769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.634 [2024-11-28 07:28:41.811781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.634 [2024-11-28 07:28:41.811794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.635 [2024-11-28 07:28:41.811805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.811819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.635 [2024-11-28 07:28:41.811847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.811860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.811872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.811886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.811898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.811911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.811923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.811937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.811949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.811962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.811974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.811988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.812000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.812018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.812031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.812045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.812057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.812070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:53936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.812082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.812139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:53944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.812164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.812179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:53952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.812193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.812208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.635 [2024-11-28 07:28:41.812236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.635 [2024-11-28 07:28:41.812250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.636 [2024-11-28 07:28:41.812290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.636 [2024-11-28 07:28:41.812371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.636 [2024-11-28 07:28:41.812424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.636 [2024-11-28 07:28:41.812623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.636 [2024-11-28 07:28:41.812636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.637 [2024-11-28 07:28:41.812648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.637 [2024-11-28 07:28:41.812661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.637 [2024-11-28 07:28:41.812692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.637 [2024-11-28 07:28:41.812705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.637 [2024-11-28 07:28:41.812716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.637 [2024-11-28 07:28:41.812730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53480 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.637 [2024-11-28 07:28:41.812741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.637 [2024-11-28 07:28:41.812754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.637 [2024-11-28 07:28:41.812765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.637 [2024-11-28 07:28:41.812777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.637 [2024-11-28 07:28:41.812789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.637 [2024-11-28 07:28:41.812801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:54048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.637 [2024-11-28 07:28:41.812812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.637 [2024-11-28 07:28:41.812829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:54056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.637 [2024-11-28 07:28:41.812841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.637 [2024-11-28 07:28:41.812853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.637 [2024-11-28 07:28:41.812864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.812876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.638 [2024-11-28 07:28:41.812887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.812899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.638 [2024-11-28 07:28:41.812911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.812928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.638 [2024-11-28 07:28:41.812940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.812953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.638 [2024-11-28 07:28:41.812964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.812977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
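A note for anyone triaging the wall of notices above: each completion line carries its NVMe status as STATUS TEXT (sct/sc), so this run of aborted READs and WRITEs can be reduced to a per-status tally with a short shell sketch like the one below. Here build.log is only a placeholder for wherever this console output was saved; the pattern simply matches the spdk_nvme_print_completion format shown in these lines.

    # Tally completion statuses, e.g. "ASYMMETRIC ACCESS INACCESSIBLE (03/02)"
    # and "ABORTED - SQ DELETION (00/08)", from a saved copy of this log.
    grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z -]* ([0-9a-f]*/[0-9a-f]*)' build.log |
        sed 's/.*NOTICE\*: //' |
        sort | uniq -c | sort -rn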
00:19:40.638 [2024-11-28 07:28:41.813004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.813017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.638 [2024-11-28 07:28:41.813028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.813041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.638 [2024-11-28 07:28:41.813052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.813065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.638 [2024-11-28 07:28:41.813076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.813089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.638 [2024-11-28 07:28:41.813120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.813150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.638 [2024-11-28 07:28:41.813162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.813176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.638 [2024-11-28 07:28:41.813193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.638 [2024-11-28 07:28:41.813208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.638 [2024-11-28 07:28:41.813220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.639 [2024-11-28 07:28:41.813233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.639 [2024-11-28 07:28:41.813246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.639 [2024-11-28 07:28:41.813259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.639 [2024-11-28 07:28:41.813271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.639 [2024-11-28 07:28:41.813285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.639 [2024-11-28 07:28:41.813296] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.639 [2024-11-28 07:28:41.813310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:54192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.639 [2024-11-28 07:28:41.813322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.639 [2024-11-28 07:28:41.813335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:54200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:40.639 [2024-11-28 07:28:41.813346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.639 [2024-11-28 07:28:41.813360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.639 [2024-11-28 07:28:41.813372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.639 [2024-11-28 07:28:41.813390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.639 [2024-11-28 07:28:41.813403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.640 [2024-11-28 07:28:41.813416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.640 [2024-11-28 07:28:41.813448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.640 [2024-11-28 07:28:41.813463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.640 [2024-11-28 07:28:41.813490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.640 [2024-11-28 07:28:41.813503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.640 [2024-11-28 07:28:41.813529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.640 [2024-11-28 07:28:41.813542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:40.640 [2024-11-28 07:28:41.813553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.640 [2024-11-28 07:28:41.813565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f4810 is same with the state(5) to be set 00:19:40.640 [2024-11-28 07:28:41.813586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:40.640 [2024-11-28 07:28:41.813595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:40.640 [2024-11-28 07:28:41.813609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53608 len:8 PRP1 0x0 PRP2 0x0 00:19:40.640 [2024-11-28 
07:28:41.813621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.640 [2024-11-28 07:28:41.813675] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f4810 was disconnected and freed. reset controller. 00:19:40.641 [2024-11-28 07:28:41.813786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.641 [2024-11-28 07:28:41.813810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.641 [2024-11-28 07:28:41.813823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.641 [2024-11-28 07:28:41.813834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.641 [2024-11-28 07:28:41.813846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.641 [2024-11-28 07:28:41.813857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.641 [2024-11-28 07:28:41.813869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:40.641 [2024-11-28 07:28:41.813880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:40.641 [2024-11-28 07:28:41.813891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167df30 is same with the state(5) to be set 00:19:40.641 [2024-11-28 07:28:41.814906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.641 [2024-11-28 07:28:41.814940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167df30 (9): Bad file descriptor 00:19:40.641 [2024-11-28 07:28:41.815256] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.641 [2024-11-28 07:28:41.815332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.641 [2024-11-28 07:28:41.815378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.641 [2024-11-28 07:28:41.815412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x167df30 with addr=10.0.0.2, port=4421 00:19:40.641 [2024-11-28 07:28:41.815430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167df30 is same with the state(5) to be set 00:19:40.642 [2024-11-28 07:28:41.815501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167df30 (9): Bad file descriptor 00:19:40.642 [2024-11-28 07:28:41.815544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:40.642 [2024-11-28 07:28:41.815558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:40.642 [2024-11-28 07:28:41.815571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
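The connect() errors just above report errno 111, that is ECONNREFUSED: while the multipath test exercises failover, the host keeps retrying the second listener at 10.0.0.2:4421 until it is reachable again, and the retry visible a few lines further on succeeds roughly ten seconds later. When reproducing this by hand, a rough way to watch the target side is to poll it with the same rpc.py invoked later in this log; the grep on the JSON text is an assumption about the nvmf_get_subsystems output layout rather than something shown here.

    # Poll the SPDK target until a listener on the failover port is reported.
    # The rpc.py path and port come from this log; adjust for other setups.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$RPC" nvmf_get_subsystems | grep -q '"trsvcid": "4421"'; do
        sleep 1
    done
    echo "listener on 4421 reported; reconnects should stop failing with errno 111"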
00:19:40.642 [2024-11-28 07:28:41.815597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.642 [2024-11-28 07:28:41.815612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.642 [2024-11-28 07:28:51.864888] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:40.642 Received shutdown signal, test time was about 55.575981 seconds 00:19:40.642 00:19:40.642 Latency(us) 00:19:40.642 [2024-11-28T07:29:02.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.642 [2024-11-28T07:29:02.917Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.642 Verification LBA range: start 0x0 length 0x4000 00:19:40.642 Nvme0n1 : 55.58 11676.96 45.61 0.00 0.00 10942.49 277.41 7015926.69 00:19:40.642 [2024-11-28T07:29:02.917Z] =================================================================================================================== 00:19:40.642 [2024-11-28T07:29:02.917Z] Total : 11676.96 45.61 0.00 0.00 10942.49 277.41 7015926.69 00:19:40.642 07:29:02 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:40.642 07:29:02 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:40.642 07:29:02 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:40.642 07:29:02 -- host/multipath.sh@125 -- # nvmftestfini 00:19:40.643 07:29:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:40.643 07:29:02 -- nvmf/common.sh@116 -- # sync 00:19:40.643 07:29:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:40.643 07:29:02 -- nvmf/common.sh@119 -- # set +e 00:19:40.643 07:29:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:40.643 07:29:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:40.643 rmmod nvme_tcp 00:19:40.643 rmmod nvme_fabrics 00:19:40.643 rmmod nvme_keyring 00:19:40.643 07:29:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:40.643 07:29:02 -- nvmf/common.sh@123 -- # set -e 00:19:40.643 07:29:02 -- nvmf/common.sh@124 -- # return 0 00:19:40.643 07:29:02 -- nvmf/common.sh@477 -- # '[' -n 84902 ']' 00:19:40.643 07:29:02 -- nvmf/common.sh@478 -- # killprocess 84902 00:19:40.643 07:29:02 -- common/autotest_common.sh@936 -- # '[' -z 84902 ']' 00:19:40.643 07:29:02 -- common/autotest_common.sh@940 -- # kill -0 84902 00:19:40.643 07:29:02 -- common/autotest_common.sh@941 -- # uname 00:19:40.643 07:29:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.643 07:29:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84902 00:19:40.643 killing process with pid 84902 00:19:40.643 07:29:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:40.643 07:29:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:40.643 07:29:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84902' 00:19:40.643 07:29:02 -- common/autotest_common.sh@955 -- # kill 84902 00:19:40.643 07:29:02 -- common/autotest_common.sh@960 -- # wait 84902 00:19:40.643 07:29:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:40.643 07:29:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:40.643 07:29:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:40.643 07:29:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.643 07:29:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:40.643 
07:29:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.643 07:29:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.643 07:29:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.909 07:29:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:40.909 00:19:40.909 real 1m1.797s 00:19:40.909 user 2m49.642s 00:19:40.909 sys 0m19.649s 00:19:40.909 07:29:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:40.909 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:19:40.909 ************************************ 00:19:40.909 END TEST nvmf_multipath 00:19:40.909 ************************************ 00:19:40.909 07:29:02 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:40.909 07:29:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:40.909 07:29:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:40.909 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:19:40.909 ************************************ 00:19:40.909 START TEST nvmf_timeout 00:19:40.909 ************************************ 00:19:40.910 07:29:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:40.910 * Looking for test storage... 00:19:40.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:40.910 07:29:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:40.910 07:29:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:40.910 07:29:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:40.910 07:29:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:40.910 07:29:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:40.910 07:29:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:40.910 07:29:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:40.910 07:29:03 -- scripts/common.sh@335 -- # IFS=.-: 00:19:40.910 07:29:03 -- scripts/common.sh@335 -- # read -ra ver1 00:19:40.910 07:29:03 -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.910 07:29:03 -- scripts/common.sh@336 -- # read -ra ver2 00:19:40.910 07:29:03 -- scripts/common.sh@337 -- # local 'op=<' 00:19:40.910 07:29:03 -- scripts/common.sh@339 -- # ver1_l=2 00:19:40.910 07:29:03 -- scripts/common.sh@340 -- # ver2_l=1 00:19:40.910 07:29:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:40.910 07:29:03 -- scripts/common.sh@343 -- # case "$op" in 00:19:40.910 07:29:03 -- scripts/common.sh@344 -- # : 1 00:19:40.910 07:29:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:40.910 07:29:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.910 07:29:03 -- scripts/common.sh@364 -- # decimal 1 00:19:40.910 07:29:03 -- scripts/common.sh@352 -- # local d=1 00:19:40.910 07:29:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.910 07:29:03 -- scripts/common.sh@354 -- # echo 1 00:19:40.910 07:29:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:40.910 07:29:03 -- scripts/common.sh@365 -- # decimal 2 00:19:40.910 07:29:03 -- scripts/common.sh@352 -- # local d=2 00:19:40.910 07:29:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.910 07:29:03 -- scripts/common.sh@354 -- # echo 2 00:19:40.910 07:29:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:40.910 07:29:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:40.910 07:29:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:40.910 07:29:03 -- scripts/common.sh@367 -- # return 0 00:19:40.910 07:29:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.910 07:29:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:40.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.910 --rc genhtml_branch_coverage=1 00:19:40.910 --rc genhtml_function_coverage=1 00:19:40.910 --rc genhtml_legend=1 00:19:40.910 --rc geninfo_all_blocks=1 00:19:40.910 --rc geninfo_unexecuted_blocks=1 00:19:40.910 00:19:40.910 ' 00:19:40.910 07:29:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:40.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.910 --rc genhtml_branch_coverage=1 00:19:40.910 --rc genhtml_function_coverage=1 00:19:40.910 --rc genhtml_legend=1 00:19:40.910 --rc geninfo_all_blocks=1 00:19:40.910 --rc geninfo_unexecuted_blocks=1 00:19:40.910 00:19:40.910 ' 00:19:40.910 07:29:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:40.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.910 --rc genhtml_branch_coverage=1 00:19:40.910 --rc genhtml_function_coverage=1 00:19:40.910 --rc genhtml_legend=1 00:19:40.910 --rc geninfo_all_blocks=1 00:19:40.910 --rc geninfo_unexecuted_blocks=1 00:19:40.910 00:19:40.910 ' 00:19:40.910 07:29:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:40.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.910 --rc genhtml_branch_coverage=1 00:19:40.910 --rc genhtml_function_coverage=1 00:19:40.910 --rc genhtml_legend=1 00:19:40.910 --rc geninfo_all_blocks=1 00:19:40.910 --rc geninfo_unexecuted_blocks=1 00:19:40.910 00:19:40.910 ' 00:19:40.910 07:29:03 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.910 07:29:03 -- nvmf/common.sh@7 -- # uname -s 00:19:40.910 07:29:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.910 07:29:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.910 07:29:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.910 07:29:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.910 07:29:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.910 07:29:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.910 07:29:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.910 07:29:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.910 07:29:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.910 07:29:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.910 07:29:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:19:40.910 
07:29:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:19:40.910 07:29:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.910 07:29:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.910 07:29:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.910 07:29:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.910 07:29:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.910 07:29:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.910 07:29:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.910 07:29:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.910 07:29:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.910 07:29:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.910 07:29:03 -- paths/export.sh@5 -- # export PATH 00:19:40.910 07:29:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.910 07:29:03 -- nvmf/common.sh@46 -- # : 0 00:19:40.910 07:29:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:40.910 07:29:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:40.910 07:29:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:40.910 07:29:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.910 07:29:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.910 07:29:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
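The nvmf/common.sh prologue traced above generates a fresh host NQN with `nvme gen-hostnqn` and records the matching host ID; the pair is kept in the NVME_HOST array and reused wherever the test acts as an NVMe-oF host. A minimal sketch of that derivation follows (variable names are taken from the trace; deriving the host ID as the UUID suffix of the NQN is an assumption, not an excerpt of the script):

    # Sketch only -- illustrative, not copied from nvmf/common.sh.
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: host ID = UUID portion of the generated NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")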
00:19:40.910 07:29:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:40.910 07:29:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:40.910 07:29:03 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:40.910 07:29:03 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:40.910 07:29:03 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.910 07:29:03 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:40.910 07:29:03 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:40.910 07:29:03 -- host/timeout.sh@19 -- # nvmftestinit 00:19:40.910 07:29:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:40.910 07:29:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.910 07:29:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:40.910 07:29:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:40.910 07:29:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:40.910 07:29:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.910 07:29:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.910 07:29:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.910 07:29:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:40.910 07:29:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:40.910 07:29:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:40.910 07:29:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:40.910 07:29:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:40.910 07:29:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:40.910 07:29:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.910 07:29:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.910 07:29:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:40.910 07:29:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:40.910 07:29:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.910 07:29:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.910 07:29:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.910 07:29:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.910 07:29:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.911 07:29:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.911 07:29:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.911 07:29:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.911 07:29:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:40.911 07:29:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:41.178 Cannot find device "nvmf_tgt_br" 00:19:41.178 07:29:03 -- nvmf/common.sh@154 -- # true 00:19:41.178 07:29:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.178 Cannot find device "nvmf_tgt_br2" 00:19:41.178 07:29:03 -- nvmf/common.sh@155 -- # true 00:19:41.178 07:29:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:41.178 07:29:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:41.178 Cannot find device "nvmf_tgt_br" 00:19:41.178 07:29:03 -- nvmf/common.sh@157 -- # true 00:19:41.178 07:29:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:41.178 Cannot find device "nvmf_tgt_br2" 00:19:41.178 07:29:03 -- nvmf/common.sh@158 -- # true 00:19:41.178 07:29:03 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:41.178 07:29:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:41.178 07:29:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.178 07:29:03 -- nvmf/common.sh@161 -- # true 00:19:41.178 07:29:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.178 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.178 07:29:03 -- nvmf/common.sh@162 -- # true 00:19:41.178 07:29:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:41.178 07:29:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:41.178 07:29:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:41.178 07:29:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:41.178 07:29:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:41.178 07:29:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:41.178 07:29:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:41.178 07:29:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:41.178 07:29:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:41.178 07:29:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:41.178 07:29:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:41.178 07:29:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:41.178 07:29:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:41.178 07:29:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.178 07:29:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:41.178 07:29:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:41.179 07:29:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:41.179 07:29:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:41.179 07:29:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:41.179 07:29:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.179 07:29:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.179 07:29:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.179 07:29:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.498 07:29:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:41.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:19:41.498 00:19:41.498 --- 10.0.0.2 ping statistics --- 00:19:41.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.498 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:41.498 07:29:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:41.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:41.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:41.498 00:19:41.498 --- 10.0.0.3 ping statistics --- 00:19:41.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.498 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:41.498 07:29:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:41.499 00:19:41.499 --- 10.0.0.1 ping statistics --- 00:19:41.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.499 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:41.499 07:29:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.499 07:29:03 -- nvmf/common.sh@421 -- # return 0 00:19:41.499 07:29:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:41.499 07:29:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.499 07:29:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:41.499 07:29:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:41.499 07:29:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.499 07:29:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:41.499 07:29:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:41.499 07:29:03 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:41.499 07:29:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:41.499 07:29:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.499 07:29:03 -- common/autotest_common.sh@10 -- # set +x 00:19:41.499 07:29:03 -- nvmf/common.sh@469 -- # nvmfpid=86085 00:19:41.499 07:29:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:41.499 07:29:03 -- nvmf/common.sh@470 -- # waitforlisten 86085 00:19:41.499 07:29:03 -- common/autotest_common.sh@829 -- # '[' -z 86085 ']' 00:19:41.499 07:29:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.499 07:29:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.499 07:29:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.499 07:29:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.499 07:29:03 -- common/autotest_common.sh@10 -- # set +x 00:19:41.499 [2024-11-28 07:29:03.541406] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:41.499 [2024-11-28 07:29:03.541496] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.499 [2024-11-28 07:29:03.678931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:41.769 [2024-11-28 07:29:03.761413] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:41.769 [2024-11-28 07:29:03.761584] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.769 [2024-11-28 07:29:03.761600] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
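Condensed, the nvmf_veth_init sequence traced above builds a small virtual topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target's nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, the peer ends are joined by the nvmf_br bridge, and TCP port 4420 is opened towards the initiator interface before the ping reachability checks. The following is a condensed sketch distilled from the ip/iptables commands visible in the trace; it reproduces the layout but not the script's checks or error handling:

    # Sketch only -- condensed from the commands in the trace above; run as root.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> target-namespace address, mirroring the check above

Keeping the target's interfaces in their own namespace appears intended to force the NVMe/TCP traffic between the initiator address (10.0.0.1) and the target listener (10.0.0.2:4420) across the veth/bridge path rather than short-circuiting over loopback.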
00:19:41.769 [2024-11-28 07:29:03.761611] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.769 [2024-11-28 07:29:03.763358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.769 [2024-11-28 07:29:03.763385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.336 07:29:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.336 07:29:04 -- common/autotest_common.sh@862 -- # return 0 00:19:42.336 07:29:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:42.336 07:29:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:42.336 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:19:42.336 07:29:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.336 07:29:04 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:42.336 07:29:04 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:42.595 [2024-11-28 07:29:04.782458] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.595 07:29:04 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:42.854 Malloc0 00:19:43.120 07:29:05 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.381 07:29:05 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.640 07:29:05 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.640 [2024-11-28 07:29:05.895659] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.640 07:29:05 -- host/timeout.sh@32 -- # bdevperf_pid=86140 00:19:43.640 07:29:05 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:43.640 07:29:05 -- host/timeout.sh@34 -- # waitforlisten 86140 /var/tmp/bdevperf.sock 00:19:43.640 07:29:05 -- common/autotest_common.sh@829 -- # '[' -z 86140 ']' 00:19:43.640 07:29:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.640 07:29:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.640 07:29:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.899 07:29:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.899 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:19:43.899 [2024-11-28 07:29:05.958387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:19:43.899 [2024-11-28 07:29:05.958514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86140 ] 00:19:43.899 [2024-11-28 07:29:06.092056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.158 [2024-11-28 07:29:06.184687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.726 07:29:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.726 07:29:06 -- common/autotest_common.sh@862 -- # return 0 00:19:44.726 07:29:06 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:44.986 07:29:07 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:45.244 NVMe0n1 00:19:45.244 07:29:07 -- host/timeout.sh@51 -- # rpc_pid=86163 00:19:45.244 07:29:07 -- host/timeout.sh@53 -- # sleep 1 00:19:45.245 07:29:07 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:45.503 Running I/O for 10 seconds... 00:19:46.442 07:29:08 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.442 [2024-11-28 07:29:08.673198] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673292] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673304] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673327] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673336] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673352] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673361] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673370] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673386] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673394] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 
[2024-11-28 07:29:08.673402] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18789c0 is same with the state(5) to be set 00:19:46.442 [2024-11-28 07:29:08.673542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.442 [2024-11-28 07:29:08.673575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.443 [2024-11-28 07:29:08.673905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.443 [2024-11-28 07:29:08.673925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.673944] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.443 [2024-11-28 07:29:08.673963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.443 [2024-11-28 07:29:08.673982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.673992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.443 [2024-11-28 07:29:08.674001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:119608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.443 [2024-11-28 07:29:08.674328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.443 [2024-11-28 07:29:08.674349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.443 [2024-11-28 07:29:08.674392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.443 [2024-11-28 07:29:08.674413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.443 [2024-11-28 07:29:08.674424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.674433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.674526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.674584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.674603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 
07:29:08.674633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.674642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:119760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.674812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.674830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.674982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.674993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.675001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675011] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.675020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.675046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.675066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.675085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.675121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.675149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.675169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.675188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.444 [2024-11-28 07:29:08.675207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.444 [2024-11-28 07:29:08.675228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.444 [2024-11-28 07:29:08.675239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 
lba:120472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 
[2024-11-28 07:29:08.675679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675866] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.675945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.675983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.675993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:46.445 [2024-11-28 07:29:08.676001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.676011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.445 [2024-11-28 07:29:08.676020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.445 [2024-11-28 07:29:08.676031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.446 [2024-11-28 07:29:08.676258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9cf0 is same with the state(5) to be set 00:19:46.446 [2024-11-28 07:29:08.676281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:46.446 [2024-11-28 07:29:08.676289] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:46.446 [2024-11-28 07:29:08.676297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120152 len:8 PRP1 0x0 PRP2 0x0 00:19:46.446 [2024-11-28 07:29:08.676317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.446 [2024-11-28 07:29:08.676373] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24a9cf0 was disconnected and freed. reset controller. 
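The dump above lists every command still queued on qid:1 when the submission queue was deleted for the reset: each outstanding READ/WRITE is completed with the ABORTED - SQ DELETION (00/08) status, after which qpair 0x24a9cf0 is disconnected and freed. When reading a console log like this by hand, a rough count of the aborted entries can be had with standard tools; a minimal sketch, where build.log is only a stand-in name for wherever this console output was saved:

    # Illustrative only: count the aborted completions in a saved copy of this log.
    grep -o 'ABORTED - SQ DELETION' build.log | wc -l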
00:19:46.446 [2024-11-28 07:29:08.676625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:46.446 [2024-11-28 07:29:08.676709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459c20 (9): Bad file descriptor 00:19:46.446 [2024-11-28 07:29:08.676817] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.446 [2024-11-28 07:29:08.676891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.446 [2024-11-28 07:29:08.676943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.446 [2024-11-28 07:29:08.676959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459c20 with addr=10.0.0.2, port=4420 00:19:46.446 [2024-11-28 07:29:08.676970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459c20 is same with the state(5) to be set 00:19:46.446 [2024-11-28 07:29:08.676989] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459c20 (9): Bad file descriptor 00:19:46.446 [2024-11-28 07:29:08.677005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:46.446 [2024-11-28 07:29:08.677013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:46.446 [2024-11-28 07:29:08.677023] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:46.446 [2024-11-28 07:29:08.677043] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:46.446 [2024-11-28 07:29:08.677053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:46.446 07:29:08 -- host/timeout.sh@56 -- # sleep 2 00:19:48.984 [2024-11-28 07:29:10.677185] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.984 [2024-11-28 07:29:10.677292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.984 [2024-11-28 07:29:10.677348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.984 [2024-11-28 07:29:10.677365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459c20 with addr=10.0.0.2, port=4420 00:19:48.984 [2024-11-28 07:29:10.677378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459c20 is same with the state(5) to be set 00:19:48.984 [2024-11-28 07:29:10.677402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459c20 (9): Bad file descriptor 00:19:48.984 [2024-11-28 07:29:10.677466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:48.984 [2024-11-28 07:29:10.677477] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:48.984 [2024-11-28 07:29:10.677486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:48.984 [2024-11-28 07:29:10.677549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
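The repeated connect() failed, errno = 111 lines from uring.c and posix.c are the socket layer reporting ECONNREFUSED: each reconnect attempt to 10.0.0.2 port 4420 is refused, so controller reinitialization fails and another reset is attempted. If the raw errno value is unfamiliar, its symbolic name can be confirmed with a one-liner (illustrative, not part of the harness):

    # Illustrative only: map errno 111 to its symbolic name on a Linux host.
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # Prints: ECONNREFUSED - Connection refused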
00:19:48.984 [2024-11-28 07:29:10.677583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:48.984 07:29:10 -- host/timeout.sh@57 -- # get_controller 00:19:48.984 07:29:10 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:48.984 07:29:10 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:48.984 07:29:10 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:48.984 07:29:10 -- host/timeout.sh@58 -- # get_bdev 00:19:48.984 07:29:10 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:48.984 07:29:10 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:48.984 07:29:11 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:48.984 07:29:11 -- host/timeout.sh@61 -- # sleep 5 00:19:50.890 [2024-11-28 07:29:12.677719] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.890 [2024-11-28 07:29:12.677834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.890 [2024-11-28 07:29:12.677874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.890 [2024-11-28 07:29:12.677890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2459c20 with addr=10.0.0.2, port=4420 00:19:50.890 [2024-11-28 07:29:12.677902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2459c20 is same with the state(5) to be set 00:19:50.890 [2024-11-28 07:29:12.677925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2459c20 (9): Bad file descriptor 00:19:50.890 [2024-11-28 07:29:12.677974] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:50.890 [2024-11-28 07:29:12.677983] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:50.890 [2024-11-28 07:29:12.677994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:50.890 [2024-11-28 07:29:12.678022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:50.890 [2024-11-28 07:29:12.678032] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.796 [2024-11-28 07:29:14.678075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:52.796 [2024-11-28 07:29:14.678153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:52.796 [2024-11-28 07:29:14.678181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:52.796 [2024-11-28 07:29:14.678207] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:52.796 [2024-11-28 07:29:14.678234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
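For orientation, the host/timeout.sh@57/@58 steps in the trace above simply ask the bdevperf RPC socket for the current controller and bdev names and compare them against NVMe0 and NVMe0n1. A stand-alone sketch of that style of check, reusing only the script, socket path and RPC methods that appear in this run (the shell variables are illustrative):

    # Illustrative sketch of the name checks done over the bdevperf RPC socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    ctrl=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
    bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
    # While the controller is attached these print NVMe0 and NVMe0n1; once it has been
    # dropped (as at the @62/@63 checks further down) both queries come back empty.
    echo "controller='$ctrl' bdev='$bdev'"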
00:19:53.734 00:19:53.734 Latency(us) 00:19:53.734 [2024-11-28T07:29:16.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.734 [2024-11-28T07:29:16.009Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.734 Verification LBA range: start 0x0 length 0x4000 00:19:53.734 NVMe0n1 : 8.12 1842.43 7.20 15.76 0.00 68789.14 3276.80 7015926.69 00:19:53.734 [2024-11-28T07:29:16.009Z] =================================================================================================================== 00:19:53.734 [2024-11-28T07:29:16.009Z] Total : 1842.43 7.20 15.76 0.00 68789.14 3276.80 7015926.69 00:19:53.734 0 00:19:53.994 07:29:16 -- host/timeout.sh@62 -- # get_controller 00:19:53.994 07:29:16 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:53.994 07:29:16 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:54.253 07:29:16 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:19:54.253 07:29:16 -- host/timeout.sh@63 -- # get_bdev 00:19:54.253 07:29:16 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:54.253 07:29:16 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:54.513 07:29:16 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:19:54.513 07:29:16 -- host/timeout.sh@65 -- # wait 86163 00:19:54.513 07:29:16 -- host/timeout.sh@67 -- # killprocess 86140 00:19:54.513 07:29:16 -- common/autotest_common.sh@936 -- # '[' -z 86140 ']' 00:19:54.513 07:29:16 -- common/autotest_common.sh@940 -- # kill -0 86140 00:19:54.513 07:29:16 -- common/autotest_common.sh@941 -- # uname 00:19:54.513 07:29:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:54.513 07:29:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86140 00:19:54.513 07:29:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:54.513 07:29:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:54.513 killing process with pid 86140 00:19:54.513 07:29:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86140' 00:19:54.513 Received shutdown signal, test time was about 9.213136 seconds 00:19:54.513 00:19:54.513 Latency(us) 00:19:54.513 [2024-11-28T07:29:16.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.513 [2024-11-28T07:29:16.788Z] =================================================================================================================== 00:19:54.513 [2024-11-28T07:29:16.788Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.513 07:29:16 -- common/autotest_common.sh@955 -- # kill 86140 00:19:54.513 07:29:16 -- common/autotest_common.sh@960 -- # wait 86140 00:19:54.772 07:29:16 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.032 [2024-11-28 07:29:17.175212] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.032 07:29:17 -- host/timeout.sh@74 -- # bdevperf_pid=86286 00:19:55.032 07:29:17 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:55.032 07:29:17 -- host/timeout.sh@76 -- # waitforlisten 86286 /var/tmp/bdevperf.sock 00:19:55.032 07:29:17 -- common/autotest_common.sh@829 -- # '[' -z 86286 ']' 00:19:55.032 07:29:17 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:19:55.032 07:29:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.032 07:29:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.032 07:29:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.032 07:29:17 -- common/autotest_common.sh@10 -- # set +x 00:19:55.032 [2024-11-28 07:29:17.247400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:55.032 [2024-11-28 07:29:17.247539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86286 ] 00:19:55.292 [2024-11-28 07:29:17.389188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.292 [2024-11-28 07:29:17.464989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.230 07:29:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.230 07:29:18 -- common/autotest_common.sh@862 -- # return 0 00:19:56.230 07:29:18 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:56.230 07:29:18 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:56.490 NVMe0n1 00:19:56.490 07:29:18 -- host/timeout.sh@84 -- # rpc_pid=86304 00:19:56.490 07:29:18 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.490 07:29:18 -- host/timeout.sh@86 -- # sleep 1 00:19:56.749 Running I/O for 10 seconds... 
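The setup for this second bdevperf pass is spread across the wrapped lines above. Pulled together in execution order, and using only the binaries, socket path and flags that this run prints, it amounts to roughly the following sketch (for readability only, not the harness script itself):

    # Illustrative re-assembly of the sequence shown above, same paths and flags as this run.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    # bdevperf on core mask 0x4: queue depth 128, 4096-byte verify workload, 10-second run.
    "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
    sleep 1   # the harness polls with its waitforlisten helper; a short sleep is a crude stand-in
    # Same bdev_nvme option the run sets first (-r -1), then attach the TCP controller with the
    # timeouts under test: ctrlr-loss 5 s, fast-io-fail 2 s, reconnect-delay 1 s.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    # Start the I/O phase; "Running I/O for 10 seconds..." above comes from this step.
    # The harness backgrounds it and records the PID (rpc_pid); foreground is fine for a sketch.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests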
00:19:57.687 07:29:19 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.949 [2024-11-28 07:29:19.971636] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971701] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971744] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971752] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971760] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971791] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971799] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971806] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971814] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971836] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971843] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971850] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971874] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971898] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971905] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971936] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971944] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.971952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878520 is same with the state(5) to be set 00:19:57.949 [2024-11-28 07:29:19.972039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.949 [2024-11-28 07:29:19.972369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.949 [2024-11-28 07:29:19.972378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.972976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.972986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.972994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.973004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.950 [2024-11-28 07:29:19.973012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.973022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.973031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.950 [2024-11-28 07:29:19.973041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.950 [2024-11-28 07:29:19.973050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 
07:29:19.973348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:115856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.951 [2024-11-28 07:29:19.973637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.951 [2024-11-28 07:29:19.973754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.951 [2024-11-28 07:29:19.973763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.973782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.973800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973812] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.973820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.973838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.973857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.973883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.973902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.973921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.973939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.973957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.973976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.973986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.973994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.974031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.974084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.974164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.974183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.974203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.952 [2024-11-28 07:29:19.974416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.952 [2024-11-28 07:29:19.974455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.952 [2024-11-28 07:29:19.974466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.953 [2024-11-28 07:29:19.974509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 
[2024-11-28 07:29:19.974543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.953 [2024-11-28 07:29:19.974561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.953 [2024-11-28 07:29:19.974617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.953 [2024-11-28 07:29:19.974679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.953 [2024-11-28 07:29:19.974716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974749] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.953 [2024-11-28 07:29:19.974784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.953 [2024-11-28 07:29:19.974824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:57.953 [2024-11-28 07:29:19.974843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:57.953 [2024-11-28 07:29:19.974915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.953 [2024-11-28 07:29:19.974925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e2cf0 is same with the state(5) to be set 00:19:57.953 [2024-11-28 07:29:19.974936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o
00:19:57.953 [2024-11-28 07:29:19.974948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:57.953 [2024-11-28 07:29:19.974955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115552 len:8 PRP1 0x0 PRP2 0x0
00:19:57.953 [2024-11-28 07:29:19.974964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:57.953 [2024-11-28 07:29:19.975014] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9e2cf0 was disconnected and freed. reset controller.
00:19:57.953 [2024-11-28 07:29:19.975281] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:57.953 [2024-11-28 07:29:19.975365] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992c20 (9): Bad file descriptor
00:19:57.953 [2024-11-28 07:29:19.975509] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.953 [2024-11-28 07:29:19.975605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.953 [2024-11-28 07:29:19.975663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:57.953 [2024-11-28 07:29:19.975679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x992c20 with addr=10.0.0.2, port=4420
00:19:57.953 [2024-11-28 07:29:19.975690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992c20 is same with the state(5) to be set
00:19:57.953 [2024-11-28 07:29:19.975709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992c20 (9): Bad file descriptor
00:19:57.953 [2024-11-28 07:29:19.975741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:57.953 [2024-11-28 07:29:19.975752] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:57.953 [2024-11-28 07:29:19.975777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:57.953 [2024-11-28 07:29:19.975797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:57.953 [2024-11-28 07:29:19.975808] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:57.953 07:29:19 -- host/timeout.sh@90 -- # sleep 1
00:19:58.891 [2024-11-28 07:29:20.975905] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.891 [2024-11-28 07:29:20.975999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.891 [2024-11-28 07:29:20.976041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:58.891 [2024-11-28 07:29:20.976056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x992c20 with addr=10.0.0.2, port=4420
00:19:58.891 [2024-11-28 07:29:20.976067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992c20 is same with the state(5) to be set
00:19:58.891 [2024-11-28 07:29:20.976089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992c20 (9): Bad file descriptor
00:19:58.891 [2024-11-28 07:29:20.976131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:58.891 [2024-11-28 07:29:20.976170] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:58.891 [2024-11-28 07:29:20.976180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:58.891 [2024-11-28 07:29:20.976204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:58.891 [2024-11-28 07:29:20.976215] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:58.891 07:29:20 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:59.151 [2024-11-28 07:29:21.241442] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:59.151 07:29:21 -- host/timeout.sh@92 -- # wait 86304
00:19:59.718 [2024-11-28 07:29:21.989479] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:07.838
00:20:07.838 Latency(us)
00:20:07.838 [2024-11-28T07:29:30.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:07.838 [2024-11-28T07:29:30.113Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:07.838 Verification LBA range: start 0x0 length 0x4000
00:20:07.838 NVMe0n1 : 10.01 10259.85 40.08 0.00 0.00 12453.29 1191.56 3019898.88
00:20:07.838 [2024-11-28T07:29:30.113Z] ===================================================================================================================
00:20:07.838 [2024-11-28T07:29:30.113Z] Total : 10259.85 40.08 0.00 0.00 12453.29 1191.56 3019898.88
00:20:07.838 0
00:20:07.838 07:29:28 -- host/timeout.sh@97 -- # rpc_pid=86414
00:20:07.838 07:29:28 -- host/timeout.sh@98 -- # sleep 1
00:20:07.838 07:29:28 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:07.838 Running I/O for 10 seconds...
00:20:07.838 07:29:29 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.101 [2024-11-28 07:29:30.150365] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.101 [2024-11-28 07:29:30.150441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.101 [2024-11-28 07:29:30.150458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.101 [2024-11-28 07:29:30.150468] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150478] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150487] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150523] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150533] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150570] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150583] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187ef10 is same with the state(5) to be set 00:20:08.102 [2024-11-28 07:29:30.150645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 
07:29:30.150746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.150986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.150999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.102 [2024-11-28 07:29:30.151044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.102 [2024-11-28 07:29:30.151114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.102 [2024-11-28 07:29:30.151135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:95960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.102 [2024-11-28 07:29:30.151155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.102 [2024-11-28 07:29:30.151175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.102 [2024-11-28 07:29:30.151214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:65 nsid:1 lba:95992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.102 [2024-11-28 07:29:30.151234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.102 [2024-11-28 07:29:30.151433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96008 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.102 [2024-11-28 07:29:30.151455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.102 [2024-11-28 07:29:30.151467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.151496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.151556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:96072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.151626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.151660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:08.103 [2024-11-28 07:29:30.151679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.151698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.151718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.151756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.151775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151890] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.151988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.151998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.152048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.152189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.152229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.152251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.103 [2024-11-28 07:29:30.152271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.103 [2024-11-28 07:29:30.152282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.103 [2024-11-28 07:29:30.152291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:08.104 [2024-11-28 07:29:30.152534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.152836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.152987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.152995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.153006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.153015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.153025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:96408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.153034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.153044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.104 [2024-11-28 07:29:30.153052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.153062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.153071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.104 [2024-11-28 07:29:30.153081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.104 [2024-11-28 07:29:30.153105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.105 [2024-11-28 07:29:30.153160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.105 [2024-11-28 07:29:30.153179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96464 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.105 [2024-11-28 07:29:30.153256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.105 [2024-11-28 07:29:30.153277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:08.105 [2024-11-28 07:29:30.153321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:08.105 [2024-11-28 07:29:30.153433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.105 [2024-11-28 07:29:30.153473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98e40 is same with the state(5) to be set 00:20:08.105 [2024-11-28 07:29:30.153496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:08.105 [2024-11-28 07:29:30.153504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:08.105 [2024-11-28 07:29:30.153512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:20:08.105 [2024-11-28 07:29:30.153521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153574] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa98e40 was disconnected and freed. reset controller. 
00:20:08.105 [2024-11-28 07:29:30.153676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.105 [2024-11-28 07:29:30.153691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.105 [2024-11-28 07:29:30.153742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.105 [2024-11-28 07:29:30.153759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.105 [2024-11-28 07:29:30.153777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.105 [2024-11-28 07:29:30.153786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992c20 is same with the state(5) to be set 00:20:08.105 [2024-11-28 07:29:30.154001] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:08.105 [2024-11-28 07:29:30.154052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992c20 (9): Bad file descriptor 00:20:08.105 [2024-11-28 07:29:30.154179] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.105 [2024-11-28 07:29:30.154234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.105 [2024-11-28 07:29:30.154298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.105 [2024-11-28 07:29:30.154314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x992c20 with addr=10.0.0.2, port=4420 00:20:08.105 [2024-11-28 07:29:30.154325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992c20 is same with the state(5) to be set 00:20:08.105 [2024-11-28 07:29:30.154344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992c20 (9): Bad file descriptor 00:20:08.105 [2024-11-28 07:29:30.154361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:08.105 [2024-11-28 07:29:30.154371] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:08.105 [2024-11-28 07:29:30.170044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:08.105 [2024-11-28 07:29:30.170107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
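The connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: the target address is reachable but nothing is listening on 10.0.0.2 port 4420, because the test has taken the subsystem's TCP listener away (it is added back at host/timeout.sh@102 below, after which the reset succeeds). Until then bdev_nvme keeps cycling through disconnect, reconnect attempt, and "Resetting controller failed". A quick way to confirm the errno mapping, as a sketch that only assumes python3 is available on the build host:

  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
  # prints: 111 Connection refused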
00:20:08.105 [2024-11-28 07:29:30.170143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:08.105 07:29:30 -- host/timeout.sh@101 -- # sleep 3 00:20:09.044 [2024-11-28 07:29:31.170297] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.044 [2024-11-28 07:29:31.170658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.044 [2024-11-28 07:29:31.170769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.044 [2024-11-28 07:29:31.170893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x992c20 with addr=10.0.0.2, port=4420 00:20:09.044 [2024-11-28 07:29:31.171052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992c20 is same with the state(5) to be set 00:20:09.044 [2024-11-28 07:29:31.171129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992c20 (9): Bad file descriptor 00:20:09.044 [2024-11-28 07:29:31.171378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.044 [2024-11-28 07:29:31.171443] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.044 [2024-11-28 07:29:31.171591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.044 [2024-11-28 07:29:31.171648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.044 [2024-11-28 07:29:31.171764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.982 [2024-11-28 07:29:32.171979] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.982 [2024-11-28 07:29:32.172080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.982 [2024-11-28 07:29:32.172121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.982 [2024-11-28 07:29:32.172136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x992c20 with addr=10.0.0.2, port=4420 00:20:09.982 [2024-11-28 07:29:32.172176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992c20 is same with the state(5) to be set 00:20:09.982 [2024-11-28 07:29:32.172200] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992c20 (9): Bad file descriptor 00:20:09.982 [2024-11-28 07:29:32.172219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.982 [2024-11-28 07:29:32.172227] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.982 [2024-11-28 07:29:32.172237] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.982 [2024-11-28 07:29:32.172262] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:09.982 [2024-11-28 07:29:32.172272] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.920 [2024-11-28 07:29:33.172528] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.920 [2024-11-28 07:29:33.172626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.920 [2024-11-28 07:29:33.172667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:10.920 [2024-11-28 07:29:33.172682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x992c20 with addr=10.0.0.2, port=4420 00:20:10.920 [2024-11-28 07:29:33.172693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992c20 is same with the state(5) to be set 00:20:10.920 [2024-11-28 07:29:33.172834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x992c20 (9): Bad file descriptor 00:20:10.920 [2024-11-28 07:29:33.172937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:10.920 [2024-11-28 07:29:33.172948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:10.921 [2024-11-28 07:29:33.172957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:10.921 [2024-11-28 07:29:33.175237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:10.921 [2024-11-28 07:29:33.175265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:10.921 07:29:33 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.180 [2024-11-28 07:29:33.432718] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.180 07:29:33 -- host/timeout.sh@103 -- # wait 86414 00:20:12.118 [2024-11-28 07:29:34.194740] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
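Once host/timeout.sh@102 re-adds the TCP listener, the next reconnect attempt gets through and the pending reset completes ("Resetting controller successful" above). Condensed, the fault-injection cycle the script drives comes down to the two rpc.py calls visible in this log; a minimal sketch of that sequence, with the NQN, address, and port taken from this run (rpc.py stands for scripts/rpc.py in the SPDK repo, which the log invokes by its absolute path):

  # drop the listener: host-side connect() now fails with errno 111 (ECONNREFUSED)
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3    # bdev_nvme keeps retrying the connection in the background (host/timeout.sh@101)
  # restore the listener: the following reconnect attempt succeeds and the controller reset completes
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420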
00:20:17.400
00:20:17.400 Latency(us)
00:20:17.400 [2024-11-28T07:29:39.675Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:17.400 [2024-11-28T07:29:39.675Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:17.400 Verification LBA range: start 0x0 length 0x4000
00:20:17.400 NVMe0n1 : 10.01 7495.85 29.28 7128.94 0.00 8738.73 878.78 3019898.88
00:20:17.400 [2024-11-28T07:29:39.675Z] ===================================================================================================================
00:20:17.400 [2024-11-28T07:29:39.675Z] Total : 7495.85 29.28 7128.94 0.00 8738.73 0.00 3019898.88
00:20:17.400 0
00:20:17.401 07:29:39 -- host/timeout.sh@105 -- # killprocess 86286
00:20:17.401 07:29:39 -- common/autotest_common.sh@936 -- # '[' -z 86286 ']'
00:20:17.401 07:29:39 -- common/autotest_common.sh@940 -- # kill -0 86286
00:20:17.401 07:29:39 -- common/autotest_common.sh@941 -- # uname
00:20:17.401 07:29:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:17.401 07:29:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86286
00:20:17.401 killing process with pid 86286 Received shutdown signal, test time was about 10.000000 seconds
00:20:17.401
00:20:17.401 Latency(us)
00:20:17.401 [2024-11-28T07:29:39.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:17.401 [2024-11-28T07:29:39.676Z] ===================================================================================================================
00:20:17.401 [2024-11-28T07:29:39.676Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:17.401 07:29:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:20:17.401 07:29:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:20:17.401 07:29:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86286'
00:20:17.401 07:29:39 -- common/autotest_common.sh@955 -- # kill 86286
00:20:17.401 07:29:39 -- common/autotest_common.sh@960 -- # wait 86286
00:20:17.401 07:29:39 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:20:17.401 07:29:39 -- host/timeout.sh@110 -- # bdevperf_pid=86530
00:20:17.401 07:29:39 -- host/timeout.sh@112 -- # waitforlisten 86530 /var/tmp/bdevperf.sock
00:20:17.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:17.401 07:29:39 -- common/autotest_common.sh@829 -- # '[' -z 86530 ']'
00:20:17.401 07:29:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:17.401 07:29:39 -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:17.401 07:29:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:17.401 07:29:39 -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:17.401 07:29:39 -- common/autotest_common.sh@10 -- # set +x
00:20:17.401 [2024-11-28 07:29:39.299842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
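At this point the first bdevperf instance (pid 86286) has been killed and a second one (pid 86530) is started with -z, so it sits waiting to be configured and driven over the RPC socket /var/tmp/bdevperf.sock. The records that follow wire up the next test case over that socket: bdev_nvme_set_options, then bdev_nvme_attach_controller with explicit reconnect knobs, then perform_tests, and finally removal of the TCP listener to provoke the reconnect path mid-run. Condensed into a sketch, with flags exactly as they appear below and paths shown repo-relative (the log uses the absolute /home/vagrant/spdk_repo/spdk prefix); in bdev_nvme, --reconnect-delay-sec is the wait between reconnect attempts and --ctrlr-loss-timeout-sec bounds how long it keeps retrying before the controller is declared lost:

  # second bdevperf instance, waiting for RPC configuration (host/timeout.sh@109)
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # configure bdev_nvme and attach the NVMe-oF/TCP controller under test
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the I/O run, then drop the listener so the connection is refused mid-run
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420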
00:20:17.401 [2024-11-28 07:29:39.300052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86530 ] 00:20:17.401 [2024-11-28 07:29:39.432071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.401 [2024-11-28 07:29:39.486473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.968 07:29:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.227 07:29:40 -- common/autotest_common.sh@862 -- # return 0 00:20:18.227 07:29:40 -- host/timeout.sh@116 -- # dtrace_pid=86545 00:20:18.227 07:29:40 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86530 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:20:18.227 07:29:40 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:20:18.486 07:29:40 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:18.486 NVMe0n1 00:20:18.745 07:29:40 -- host/timeout.sh@124 -- # rpc_pid=86588 00:20:18.745 07:29:40 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:18.745 07:29:40 -- host/timeout.sh@125 -- # sleep 1 00:20:18.745 Running I/O for 10 seconds... 00:20:19.682 07:29:41 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.944 [2024-11-28 07:29:42.027712] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028027] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028037] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028044] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028053] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028061] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028068] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028076] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028091] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028099] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028106] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028112] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028120] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028127] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028135] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028142] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028176] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028193] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028201] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028209] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028217] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028226] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028234] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028241] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028260] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028269] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028278] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028286] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028294] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028302] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028312] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028320] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028378] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028387] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028396] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028404] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028412] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028420] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028427] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028436] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028473] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028480] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028496] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028514] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the 
state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028522] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.944 [2024-11-28 07:29:42.028531] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028539] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028561] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028568] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028591] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028598] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028612] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028619] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028627] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028643] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028651] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028658] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028665] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028685] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028700] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028713] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028721] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028727] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028734] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028741] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028748] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028755] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028762] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028784] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028790] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028797] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028812] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028820] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028828] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028835] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028842] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028849] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028858] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028865] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 
07:29:42.028872] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028886] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028893] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028900] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028907] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028913] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028928] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028934] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028941] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028947] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028955] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028962] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028969] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028975] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028989] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028996] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.028988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-11-28 07:29:42.029003] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with id:0 cdw10:00000000 cdw11:00000000 00:20:19.945 the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.029012] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.029019] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.029019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.945 [2024-11-28 07:29:42.029026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.029031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.945 [2024-11-28 07:29:42.029034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.945 [2024-11-28 07:29:42.029040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.946 [2024-11-28 07:29:42.029049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.946 [2024-11-28 07:29:42.029051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.946 [2024-11-28 07:29:42.029058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029060] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.946 [2024-11-28 07:29:42.029067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:19.946 [2024-11-28 07:29:42.029069] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with the state(5) to be set 00:20:19.946 [2024-11-28 07:29:42.029076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-11-28 07:29:42.029077] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187d2f0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 the state(5) to be set 00:20:19.946 [2024-11-28 07:29:42.029086] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbea0 is same with the state(5) to be set 00:20:19.946 [2024-11-28 07:29:42.029162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:124 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10632 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93480 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:30152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.946 [2024-11-28 07:29:42.029619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.946 [2024-11-28 07:29:42.029626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 
07:29:42.029751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:34768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:31184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:111504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.029985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.029999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030093] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.947 [2024-11-28 07:29:42.030151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.947 [2024-11-28 07:29:42.030158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:111592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:19.948 [2024-11-28 07:29:42.030452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 
07:29:42.030630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.948 [2024-11-28 07:29:42.030696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.948 [2024-11-28 07:29:42.030703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030973] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.030990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.030997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.031006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.031013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.031028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.031036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.031045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.031052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.031062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:45896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.031069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.031079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.031086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.031095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.031102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.031116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.949 [2024-11-28 07:29:42.031124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.949 [2024-11-28 07:29:42.031133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21424 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.950 [2024-11-28 07:29:42.031344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.950 [2024-11-28 07:29:42.031394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2e070 is same with the state(5) to be set 00:20:19.950 [2024-11-28 07:29:42.031414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:19.950 [2024-11-28 07:29:42.031420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:19.950 [2024-11-28 07:29:42.031426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120640 len:8 PRP1 0x0 PRP2 0x0 00:20:19.950 [2024-11-28 07:29:42.031434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.950 [2024-11-28 07:29:42.031482] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a2e070 was disconnected and freed. reset controller. 
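Note: the long run of READ / ABORTED - SQ DELETION pairs above is the fallout of the reset that follows. Every READ still queued on I/O qpair 1 is completed with status 00/08 (generic command status, "command aborted due to SQ deletion") when the submission queue is torn down, after which qpair 0x1a2e070 is disconnected and freed. If a copy of this output has been saved to a file, the aborted commands can be tallied with a one-liner; the file name used here is only a placeholder, not something the test produces.

    # count how many queued commands were completed with ABORTED - SQ DELETION
    # during the qpair teardown above (build.log is a placeholder file name)
    grep -c 'ABORTED - SQ DELETION' build.log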
00:20:19.950 [2024-11-28 07:29:42.031739] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:19.950 [2024-11-28 07:29:42.031783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fbea0 (9): Bad file descriptor 00:20:19.950 [2024-11-28 07:29:42.031909] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.950 [2024-11-28 07:29:42.031977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.950 [2024-11-28 07:29:42.032017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:19.950 [2024-11-28 07:29:42.032032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fbea0 with addr=10.0.0.2, port=4420 00:20:19.950 [2024-11-28 07:29:42.032042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbea0 is same with the state(5) to be set 00:20:19.950 [2024-11-28 07:29:42.032059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fbea0 (9): Bad file descriptor 00:20:19.950 [2024-11-28 07:29:42.032075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.950 [2024-11-28 07:29:42.032083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:19.950 07:29:42 -- host/timeout.sh@128 -- # wait 86588 00:20:19.950 [2024-11-28 07:29:42.048056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.950 [2024-11-28 07:29:42.048285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:19.950 [2024-11-28 07:29:42.048464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:21.853 [2024-11-28 07:29:44.048741] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.853 [2024-11-28 07:29:44.049012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.853 [2024-11-28 07:29:44.049107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:21.853 [2024-11-28 07:29:44.049212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fbea0 with addr=10.0.0.2, port=4420 00:20:21.853 [2024-11-28 07:29:44.049376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbea0 is same with the state(5) to be set 00:20:21.853 [2024-11-28 07:29:44.049526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fbea0 (9): Bad file descriptor 00:20:21.853 [2024-11-28 07:29:44.049675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.853 [2024-11-28 07:29:44.049808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:21.853 [2024-11-28 07:29:44.049929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:21.853 [2024-11-28 07:29:44.050071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:21.853 [2024-11-28 07:29:44.050121] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:24.399 [2024-11-28 07:29:46.050418] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:24.399 [2024-11-28 07:29:46.050679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:24.399 [2024-11-28 07:29:46.050775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:24.399 [2024-11-28 07:29:46.050923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fbea0 with addr=10.0.0.2, port=4420
00:20:24.399 [2024-11-28 07:29:46.051071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fbea0 is same with the state(5) to be set
00:20:24.399 [2024-11-28 07:29:46.051239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fbea0 (9): Bad file descriptor
00:20:24.399 [2024-11-28 07:29:46.051403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:24.399 [2024-11-28 07:29:46.051423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:24.399 [2024-11-28 07:29:46.051432] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:24.399 [2024-11-28 07:29:46.051453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:24.399 [2024-11-28 07:29:46.051463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:25.777 [2024-11-28 07:29:48.051510] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:25.777 [2024-11-28 07:29:48.051540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:25.777 [2024-11-28 07:29:48.051565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:25.777 [2024-11-28 07:29:48.051573] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:20:25.777 [2024-11-28 07:29:48.051590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:27.156
00:20:27.156 Latency(us)
00:20:27.156 [2024-11-28T07:29:49.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:27.156 [2024-11-28T07:29:49.431Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:20:27.156 NVMe0n1 : 8.17 2656.74 10.38 15.66 0.00 47831.66 6494.02 7046430.72
00:20:27.156 [2024-11-28T07:29:49.431Z] ===================================================================================================================
00:20:27.156 [2024-11-28T07:29:49.431Z] Total : 2656.74 10.38 15.66 0.00 47831.66 6494.02 7046430.72
00:20:27.156 0
00:20:27.156 07:29:49 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:20:27.156 Attaching 5 probes...
00:20:27.156 1267.804966: reset bdev controller NVMe0 00:20:27.156 1267.906810: reconnect bdev controller NVMe0 00:20:27.156 3284.747095: reconnect delay bdev controller NVMe0 00:20:27.156 3284.763491: reconnect bdev controller NVMe0 00:20:27.156 5286.433334: reconnect delay bdev controller NVMe0 00:20:27.156 5286.448959: reconnect bdev controller NVMe0 00:20:27.156 7287.572069: reconnect delay bdev controller NVMe0 00:20:27.156 7287.586159: reconnect bdev controller NVMe0 00:20:27.156 07:29:49 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:27.156 07:29:49 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:27.156 07:29:49 -- host/timeout.sh@136 -- # kill 86545 00:20:27.156 07:29:49 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:27.156 07:29:49 -- host/timeout.sh@139 -- # killprocess 86530 00:20:27.156 07:29:49 -- common/autotest_common.sh@936 -- # '[' -z 86530 ']' 00:20:27.156 07:29:49 -- common/autotest_common.sh@940 -- # kill -0 86530 00:20:27.156 07:29:49 -- common/autotest_common.sh@941 -- # uname 00:20:27.156 07:29:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:27.156 07:29:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86530 00:20:27.156 killing process with pid 86530 00:20:27.156 Received shutdown signal, test time was about 8.239220 seconds 00:20:27.156 00:20:27.156 Latency(us) 00:20:27.156 [2024-11-28T07:29:49.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.156 [2024-11-28T07:29:49.431Z] =================================================================================================================== 00:20:27.156 [2024-11-28T07:29:49.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:27.156 07:29:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:27.156 07:29:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:27.156 07:29:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86530' 00:20:27.156 07:29:49 -- common/autotest_common.sh@955 -- # kill 86530 00:20:27.156 07:29:49 -- common/autotest_common.sh@960 -- # wait 86530 00:20:27.156 07:29:49 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.414 07:29:49 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:27.414 07:29:49 -- host/timeout.sh@145 -- # nvmftestfini 00:20:27.414 07:29:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:27.414 07:29:49 -- nvmf/common.sh@116 -- # sync 00:20:27.414 07:29:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:27.414 07:29:49 -- nvmf/common.sh@119 -- # set +e 00:20:27.414 07:29:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:27.414 07:29:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:27.414 rmmod nvme_tcp 00:20:27.414 rmmod nvme_fabrics 00:20:27.414 rmmod nvme_keyring 00:20:27.414 07:29:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:27.414 07:29:49 -- nvmf/common.sh@123 -- # set -e 00:20:27.414 07:29:49 -- nvmf/common.sh@124 -- # return 0 00:20:27.414 07:29:49 -- nvmf/common.sh@477 -- # '[' -n 86085 ']' 00:20:27.414 07:29:49 -- nvmf/common.sh@478 -- # killprocess 86085 00:20:27.414 07:29:49 -- common/autotest_common.sh@936 -- # '[' -z 86085 ']' 00:20:27.414 07:29:49 -- common/autotest_common.sh@940 -- # kill -0 86085 00:20:27.414 07:29:49 -- common/autotest_common.sh@941 -- # uname 00:20:27.414 07:29:49 -- common/autotest_common.sh@941 -- # '[' Linux = 
Linux ']' 00:20:27.414 07:29:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86085 00:20:27.414 07:29:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:27.414 killing process with pid 86085 00:20:27.414 07:29:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:27.414 07:29:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86085' 00:20:27.414 07:29:49 -- common/autotest_common.sh@955 -- # kill 86085 00:20:27.414 07:29:49 -- common/autotest_common.sh@960 -- # wait 86085 00:20:27.673 07:29:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:27.673 07:29:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:27.673 07:29:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:27.673 07:29:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.673 07:29:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:27.673 07:29:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.673 07:29:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.673 07:29:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.673 07:29:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:27.673 00:20:27.673 real 0m46.978s 00:20:27.673 user 2m17.630s 00:20:27.673 sys 0m5.647s 00:20:27.673 07:29:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:27.673 07:29:49 -- common/autotest_common.sh@10 -- # set +x 00:20:27.673 ************************************ 00:20:27.673 END TEST nvmf_timeout 00:20:27.673 ************************************ 00:20:27.933 07:29:49 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:20:27.933 07:29:49 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:20:27.933 07:29:49 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.933 07:29:49 -- common/autotest_common.sh@10 -- # set +x 00:20:27.933 07:29:50 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:20:27.933 ************************************ 00:20:27.933 END TEST nvmf_tcp 00:20:27.933 ************************************ 00:20:27.933 00:20:27.933 real 10m48.703s 00:20:27.933 user 30m14.216s 00:20:27.933 sys 3m19.882s 00:20:27.933 07:29:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:27.933 07:29:50 -- common/autotest_common.sh@10 -- # set +x 00:20:27.933 07:29:50 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:20:27.933 07:29:50 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:27.933 07:29:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:27.933 07:29:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:27.933 07:29:50 -- common/autotest_common.sh@10 -- # set +x 00:20:27.933 ************************************ 00:20:27.933 START TEST nvmf_dif 00:20:27.933 ************************************ 00:20:27.933 07:29:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:27.933 * Looking for test storage... 
00:20:27.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:27.933 07:29:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:27.933 07:29:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:27.933 07:29:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:28.191 07:29:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:28.192 07:29:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:28.192 07:29:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:28.192 07:29:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:28.192 07:29:50 -- scripts/common.sh@335 -- # IFS=.-: 00:20:28.192 07:29:50 -- scripts/common.sh@335 -- # read -ra ver1 00:20:28.192 07:29:50 -- scripts/common.sh@336 -- # IFS=.-: 00:20:28.192 07:29:50 -- scripts/common.sh@336 -- # read -ra ver2 00:20:28.192 07:29:50 -- scripts/common.sh@337 -- # local 'op=<' 00:20:28.192 07:29:50 -- scripts/common.sh@339 -- # ver1_l=2 00:20:28.192 07:29:50 -- scripts/common.sh@340 -- # ver2_l=1 00:20:28.192 07:29:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:28.192 07:29:50 -- scripts/common.sh@343 -- # case "$op" in 00:20:28.192 07:29:50 -- scripts/common.sh@344 -- # : 1 00:20:28.192 07:29:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:28.192 07:29:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:28.192 07:29:50 -- scripts/common.sh@364 -- # decimal 1 00:20:28.192 07:29:50 -- scripts/common.sh@352 -- # local d=1 00:20:28.192 07:29:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:28.192 07:29:50 -- scripts/common.sh@354 -- # echo 1 00:20:28.192 07:29:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:28.192 07:29:50 -- scripts/common.sh@365 -- # decimal 2 00:20:28.192 07:29:50 -- scripts/common.sh@352 -- # local d=2 00:20:28.192 07:29:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:28.192 07:29:50 -- scripts/common.sh@354 -- # echo 2 00:20:28.192 07:29:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:28.192 07:29:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:28.192 07:29:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:28.192 07:29:50 -- scripts/common.sh@367 -- # return 0 00:20:28.192 07:29:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:28.192 07:29:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.192 --rc genhtml_branch_coverage=1 00:20:28.192 --rc genhtml_function_coverage=1 00:20:28.192 --rc genhtml_legend=1 00:20:28.192 --rc geninfo_all_blocks=1 00:20:28.192 --rc geninfo_unexecuted_blocks=1 00:20:28.192 00:20:28.192 ' 00:20:28.192 07:29:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.192 --rc genhtml_branch_coverage=1 00:20:28.192 --rc genhtml_function_coverage=1 00:20:28.192 --rc genhtml_legend=1 00:20:28.192 --rc geninfo_all_blocks=1 00:20:28.192 --rc geninfo_unexecuted_blocks=1 00:20:28.192 00:20:28.192 ' 00:20:28.192 07:29:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.192 --rc genhtml_branch_coverage=1 00:20:28.192 --rc genhtml_function_coverage=1 00:20:28.192 --rc genhtml_legend=1 00:20:28.192 --rc geninfo_all_blocks=1 00:20:28.192 --rc geninfo_unexecuted_blocks=1 00:20:28.192 00:20:28.192 ' 00:20:28.192 
07:29:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:28.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:28.192 --rc genhtml_branch_coverage=1 00:20:28.192 --rc genhtml_function_coverage=1 00:20:28.192 --rc genhtml_legend=1 00:20:28.192 --rc geninfo_all_blocks=1 00:20:28.192 --rc geninfo_unexecuted_blocks=1 00:20:28.192 00:20:28.192 ' 00:20:28.192 07:29:50 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:28.192 07:29:50 -- nvmf/common.sh@7 -- # uname -s 00:20:28.192 07:29:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.192 07:29:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.192 07:29:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.192 07:29:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.192 07:29:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.192 07:29:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.192 07:29:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.192 07:29:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.192 07:29:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.192 07:29:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.192 07:29:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:20:28.192 07:29:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:20:28.192 07:29:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.192 07:29:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.192 07:29:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:28.192 07:29:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:28.192 07:29:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.192 07:29:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.192 07:29:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.192 07:29:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.192 07:29:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.192 07:29:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.192 07:29:50 -- paths/export.sh@5 -- # export PATH 00:20:28.192 07:29:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.192 07:29:50 -- nvmf/common.sh@46 -- # : 0 00:20:28.192 07:29:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:28.192 07:29:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:28.192 07:29:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:28.192 07:29:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.192 07:29:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.192 07:29:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:28.192 07:29:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:28.192 07:29:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:28.192 07:29:50 -- target/dif.sh@15 -- # NULL_META=16 00:20:28.192 07:29:50 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:28.192 07:29:50 -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:28.192 07:29:50 -- target/dif.sh@15 -- # NULL_DIF=1 00:20:28.192 07:29:50 -- target/dif.sh@135 -- # nvmftestinit 00:20:28.192 07:29:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:28.192 07:29:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.192 07:29:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:28.192 07:29:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:28.192 07:29:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:28.192 07:29:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.192 07:29:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:28.192 07:29:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.192 07:29:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:28.192 07:29:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:28.192 07:29:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:28.192 07:29:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:28.192 07:29:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:28.192 07:29:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:28.192 07:29:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.192 07:29:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.192 07:29:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:28.192 07:29:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:28.192 07:29:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:28.192 07:29:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:28.192 07:29:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:28.192 07:29:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.192 07:29:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:28.192 07:29:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:28.192 07:29:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:28.192 07:29:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:28.192 07:29:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:28.192 07:29:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:28.192 Cannot find device "nvmf_tgt_br" 
00:20:28.192 07:29:50 -- nvmf/common.sh@154 -- # true 00:20:28.192 07:29:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:28.192 Cannot find device "nvmf_tgt_br2" 00:20:28.192 07:29:50 -- nvmf/common.sh@155 -- # true 00:20:28.192 07:29:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:28.192 07:29:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:28.192 Cannot find device "nvmf_tgt_br" 00:20:28.192 07:29:50 -- nvmf/common.sh@157 -- # true 00:20:28.192 07:29:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:28.192 Cannot find device "nvmf_tgt_br2" 00:20:28.192 07:29:50 -- nvmf/common.sh@158 -- # true 00:20:28.192 07:29:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:28.192 07:29:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:28.192 07:29:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:28.192 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:28.192 07:29:50 -- nvmf/common.sh@161 -- # true 00:20:28.193 07:29:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:28.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:28.193 07:29:50 -- nvmf/common.sh@162 -- # true 00:20:28.193 07:29:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:28.193 07:29:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:28.193 07:29:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:28.193 07:29:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:28.193 07:29:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:28.193 07:29:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:28.193 07:29:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:28.193 07:29:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:28.193 07:29:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:28.193 07:29:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:28.193 07:29:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:28.193 07:29:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:28.193 07:29:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:28.193 07:29:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:28.193 07:29:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:28.453 07:29:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:28.453 07:29:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:28.453 07:29:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:28.453 07:29:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:28.453 07:29:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:28.453 07:29:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:28.453 07:29:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:28.453 07:29:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:28.453 07:29:50 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:28.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:20:28.453 00:20:28.453 --- 10.0.0.2 ping statistics --- 00:20:28.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.453 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:28.453 07:29:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:28.453 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:28.453 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:20:28.453 00:20:28.453 --- 10.0.0.3 ping statistics --- 00:20:28.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.453 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:20:28.453 07:29:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:28.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:28.453 00:20:28.453 --- 10.0.0.1 ping statistics --- 00:20:28.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.453 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:28.453 07:29:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.453 07:29:50 -- nvmf/common.sh@421 -- # return 0 00:20:28.453 07:29:50 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:28.453 07:29:50 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:28.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.712 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:28.712 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:28.712 07:29:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.712 07:29:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:28.713 07:29:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:28.713 07:29:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.713 07:29:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:28.713 07:29:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:28.713 07:29:50 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:28.713 07:29:50 -- target/dif.sh@137 -- # nvmfappstart 00:20:28.713 07:29:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:28.713 07:29:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.713 07:29:50 -- common/autotest_common.sh@10 -- # set +x 00:20:28.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.713 07:29:50 -- nvmf/common.sh@469 -- # nvmfpid=87036 00:20:28.713 07:29:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:28.713 07:29:50 -- nvmf/common.sh@470 -- # waitforlisten 87036 00:20:28.713 07:29:50 -- common/autotest_common.sh@829 -- # '[' -z 87036 ']' 00:20:28.713 07:29:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.713 07:29:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.713 07:29:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
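At this point nvmf_veth_init has finished: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target side lives in the nvmf_tgt_ns_spdk namespace with nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), and the three veth peers are bridged together over nvmf_br, which is what the three one-packet pings just verified. A condensed, stand-alone sketch of that topology, using only commands and names that appear in the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listener 1
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # target listener 2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3            # initiator -> both target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The INPUT rule opens the NVMe/TCP port (4420) toward the initiator interface, and the FORWARD rule lets traffic hairpin across the bridge.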
00:20:28.713 07:29:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.713 07:29:50 -- common/autotest_common.sh@10 -- # set +x 00:20:28.971 [2024-11-28 07:29:51.026048] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:28.971 [2024-11-28 07:29:51.026141] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.971 [2024-11-28 07:29:51.168446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.971 [2024-11-28 07:29:51.232574] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:28.971 [2024-11-28 07:29:51.232753] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.971 [2024-11-28 07:29:51.232772] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.971 [2024-11-28 07:29:51.232784] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.971 [2024-11-28 07:29:51.232826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.909 07:29:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.909 07:29:52 -- common/autotest_common.sh@862 -- # return 0 00:20:29.909 07:29:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:29.909 07:29:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.909 07:29:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.909 07:29:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.909 07:29:52 -- target/dif.sh@139 -- # create_transport 00:20:29.909 07:29:52 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:29.909 07:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.909 07:29:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.909 [2024-11-28 07:29:52.073168] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.909 07:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.909 07:29:52 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:29.909 07:29:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:29.909 07:29:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:29.909 07:29:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.909 ************************************ 00:20:29.909 START TEST fio_dif_1_default 00:20:29.909 ************************************ 00:20:29.909 07:29:52 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:20:29.909 07:29:52 -- target/dif.sh@86 -- # create_subsystems 0 00:20:29.909 07:29:52 -- target/dif.sh@28 -- # local sub 00:20:29.909 07:29:52 -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.909 07:29:52 -- target/dif.sh@31 -- # create_subsystem 0 00:20:29.909 07:29:52 -- target/dif.sh@18 -- # local sub_id=0 00:20:29.909 07:29:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:29.909 07:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.909 07:29:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.909 bdev_null0 00:20:29.909 07:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.909 07:29:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:29.909 07:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.909 07:29:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.909 07:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.909 07:29:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:29.909 07:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.909 07:29:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.909 07:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.909 07:29:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:29.909 07:29:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.909 07:29:52 -- common/autotest_common.sh@10 -- # set +x 00:20:29.909 [2024-11-28 07:29:52.121293] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.909 07:29:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.909 07:29:52 -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:29.909 07:29:52 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:29.909 07:29:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:29.909 07:29:52 -- nvmf/common.sh@520 -- # config=() 00:20:29.909 07:29:52 -- nvmf/common.sh@520 -- # local subsystem config 00:20:29.909 07:29:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:29.909 07:29:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.909 07:29:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:29.909 { 00:20:29.909 "params": { 00:20:29.909 "name": "Nvme$subsystem", 00:20:29.909 "trtype": "$TEST_TRANSPORT", 00:20:29.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.910 "adrfam": "ipv4", 00:20:29.910 "trsvcid": "$NVMF_PORT", 00:20:29.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.910 "hdgst": ${hdgst:-false}, 00:20:29.910 "ddgst": ${ddgst:-false} 00:20:29.910 }, 00:20:29.910 "method": "bdev_nvme_attach_controller" 00:20:29.910 } 00:20:29.910 EOF 00:20:29.910 )") 00:20:29.910 07:29:52 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.910 07:29:52 -- target/dif.sh@82 -- # gen_fio_conf 00:20:29.910 07:29:52 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:29.910 07:29:52 -- target/dif.sh@54 -- # local file 00:20:29.910 07:29:52 -- target/dif.sh@56 -- # cat 00:20:29.910 07:29:52 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.910 07:29:52 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:29.910 07:29:52 -- nvmf/common.sh@542 -- # cat 00:20:29.910 07:29:52 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.910 07:29:52 -- common/autotest_common.sh@1330 -- # shift 00:20:29.910 07:29:52 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:29.910 07:29:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.910 07:29:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:29.910 07:29:52 -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.910 07:29:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.910 
07:29:52 -- nvmf/common.sh@544 -- # jq . 00:20:29.910 07:29:52 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:29.910 07:29:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:29.910 07:29:52 -- nvmf/common.sh@545 -- # IFS=, 00:20:29.910 07:29:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:29.910 "params": { 00:20:29.910 "name": "Nvme0", 00:20:29.910 "trtype": "tcp", 00:20:29.910 "traddr": "10.0.0.2", 00:20:29.910 "adrfam": "ipv4", 00:20:29.910 "trsvcid": "4420", 00:20:29.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:29.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:29.910 "hdgst": false, 00:20:29.910 "ddgst": false 00:20:29.910 }, 00:20:29.910 "method": "bdev_nvme_attach_controller" 00:20:29.910 }' 00:20:29.910 07:29:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:29.910 07:29:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:29.910 07:29:52 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.910 07:29:52 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.910 07:29:52 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:29.910 07:29:52 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:30.170 07:29:52 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:30.170 07:29:52 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:30.170 07:29:52 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:30.170 07:29:52 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:30.170 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:30.170 fio-3.35 00:20:30.170 Starting 1 thread 00:20:30.739 [2024-11-28 07:29:52.708736] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
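The jq/printf pair above collapses the per-subsystem heredoc fragments into the single JSON document just printed (one bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0), and fio_bdev then hands it to fio on a spare file descriptor while preloading the SPDK bdev engine. A reduced sketch of that launch, with hypothetical file names standing in for the /dev/fd/62 (JSON config) and /dev/fd/61 (job file) descriptors used in the trace:

    # Paths are taken from the trace; "bdev.json" and "dif.fio" are hypothetical
    # stand-ins for the two file descriptors the script passes to fio.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    LD_PRELOAD="$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf=bdev.json \
        dif.fio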
00:20:30.739 [2024-11-28 07:29:52.708806] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:40.751 00:20:40.751 filename0: (groupid=0, jobs=1): err= 0: pid=87104: Thu Nov 28 07:30:02 2024 00:20:40.751 read: IOPS=11.0k, BW=42.9MiB/s (45.0MB/s)(429MiB/10001msec) 00:20:40.751 slat (usec): min=5, max=132, avg= 6.91, stdev= 2.36 00:20:40.751 clat (usec): min=310, max=4529, avg=343.62, stdev=39.09 00:20:40.751 lat (usec): min=316, max=4555, avg=350.53, stdev=39.52 00:20:40.751 clat percentiles (usec): 00:20:40.751 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 326], 00:20:40.751 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:20:40.751 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 388], 00:20:40.751 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 498], 99.95th=[ 537], 00:20:40.751 | 99.99th=[ 1958] 00:20:40.751 bw ( KiB/s): min=41120, max=44672, per=100.00%, avg=43984.00, stdev=870.15, samples=19 00:20:40.751 iops : min=10280, max=11168, avg=10996.21, stdev=217.68, samples=19 00:20:40.751 lat (usec) : 500=99.90%, 750=0.07%, 1000=0.01% 00:20:40.751 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:20:40.751 cpu : usr=83.88%, sys=14.19%, ctx=22, majf=0, minf=0 00:20:40.751 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.751 issued rwts: total=109828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.751 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:40.751 00:20:40.751 Run status group 0 (all jobs): 00:20:40.751 READ: bw=42.9MiB/s (45.0MB/s), 42.9MiB/s-42.9MiB/s (45.0MB/s-45.0MB/s), io=429MiB (450MB), run=10001-10001msec 00:20:40.751 07:30:03 -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:40.751 07:30:03 -- target/dif.sh@43 -- # local sub 00:20:40.751 07:30:03 -- target/dif.sh@45 -- # for sub in "$@" 00:20:40.751 07:30:03 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:40.751 07:30:03 -- target/dif.sh@36 -- # local sub_id=0 00:20:40.751 07:30:03 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:40.751 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.751 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 ************************************ 00:20:41.011 END TEST fio_dif_1_default 00:20:41.011 ************************************ 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 00:20:41.011 real 0m10.952s 00:20:41.011 user 0m8.999s 00:20:41.011 sys 0m1.673s 00:20:41.011 07:30:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 07:30:03 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:41.011 07:30:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:41.011 07:30:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 ************************************ 00:20:41.011 START TEST 
fio_dif_1_multi_subsystems 00:20:41.011 ************************************ 00:20:41.011 07:30:03 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:20:41.011 07:30:03 -- target/dif.sh@92 -- # local files=1 00:20:41.011 07:30:03 -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:41.011 07:30:03 -- target/dif.sh@28 -- # local sub 00:20:41.011 07:30:03 -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.011 07:30:03 -- target/dif.sh@31 -- # create_subsystem 0 00:20:41.011 07:30:03 -- target/dif.sh@18 -- # local sub_id=0 00:20:41.011 07:30:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 bdev_null0 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 [2024-11-28 07:30:03.125754] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.011 07:30:03 -- target/dif.sh@31 -- # create_subsystem 1 00:20:41.011 07:30:03 -- target/dif.sh@18 -- # local sub_id=1 00:20:41.011 07:30:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 bdev_null1 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.011 07:30:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.011 07:30:03 -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.011 07:30:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.011 07:30:03 -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:41.011 07:30:03 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:41.011 07:30:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:41.011 07:30:03 -- nvmf/common.sh@520 -- # config=() 00:20:41.011 07:30:03 -- nvmf/common.sh@520 -- # local subsystem config 00:20:41.011 07:30:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:41.012 07:30:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.012 07:30:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:41.012 { 00:20:41.012 "params": { 00:20:41.012 "name": "Nvme$subsystem", 00:20:41.012 "trtype": "$TEST_TRANSPORT", 00:20:41.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.012 "adrfam": "ipv4", 00:20:41.012 "trsvcid": "$NVMF_PORT", 00:20:41.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.012 "hdgst": ${hdgst:-false}, 00:20:41.012 "ddgst": ${ddgst:-false} 00:20:41.012 }, 00:20:41.012 "method": "bdev_nvme_attach_controller" 00:20:41.012 } 00:20:41.012 EOF 00:20:41.012 )") 00:20:41.012 07:30:03 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.012 07:30:03 -- target/dif.sh@82 -- # gen_fio_conf 00:20:41.012 07:30:03 -- target/dif.sh@54 -- # local file 00:20:41.012 07:30:03 -- target/dif.sh@56 -- # cat 00:20:41.012 07:30:03 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:41.012 07:30:03 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.012 07:30:03 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:41.012 07:30:03 -- nvmf/common.sh@542 -- # cat 00:20:41.012 07:30:03 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.012 07:30:03 -- common/autotest_common.sh@1330 -- # shift 00:20:41.012 07:30:03 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:41.012 07:30:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.012 07:30:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:41.012 07:30:03 -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.012 07:30:03 -- target/dif.sh@73 -- # cat 00:20:41.012 07:30:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.012 07:30:03 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:41.012 07:30:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:41.012 07:30:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:41.012 07:30:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:41.012 { 00:20:41.012 "params": { 00:20:41.012 "name": "Nvme$subsystem", 00:20:41.012 "trtype": "$TEST_TRANSPORT", 00:20:41.012 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.012 "adrfam": "ipv4", 00:20:41.012 "trsvcid": "$NVMF_PORT", 00:20:41.012 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.012 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.012 "hdgst": ${hdgst:-false}, 00:20:41.012 "ddgst": ${ddgst:-false} 00:20:41.012 }, 00:20:41.012 "method": "bdev_nvme_attach_controller" 00:20:41.012 } 00:20:41.012 EOF 00:20:41.012 )") 00:20:41.012 07:30:03 -- nvmf/common.sh@542 -- # cat 00:20:41.012 07:30:03 -- target/dif.sh@72 
-- # (( file++ )) 00:20:41.012 07:30:03 -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.012 07:30:03 -- nvmf/common.sh@544 -- # jq . 00:20:41.012 07:30:03 -- nvmf/common.sh@545 -- # IFS=, 00:20:41.012 07:30:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:41.012 "params": { 00:20:41.012 "name": "Nvme0", 00:20:41.012 "trtype": "tcp", 00:20:41.012 "traddr": "10.0.0.2", 00:20:41.012 "adrfam": "ipv4", 00:20:41.012 "trsvcid": "4420", 00:20:41.012 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:41.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:41.012 "hdgst": false, 00:20:41.012 "ddgst": false 00:20:41.012 }, 00:20:41.012 "method": "bdev_nvme_attach_controller" 00:20:41.012 },{ 00:20:41.012 "params": { 00:20:41.012 "name": "Nvme1", 00:20:41.012 "trtype": "tcp", 00:20:41.012 "traddr": "10.0.0.2", 00:20:41.012 "adrfam": "ipv4", 00:20:41.012 "trsvcid": "4420", 00:20:41.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.012 "hdgst": false, 00:20:41.012 "ddgst": false 00:20:41.012 }, 00:20:41.012 "method": "bdev_nvme_attach_controller" 00:20:41.012 }' 00:20:41.012 07:30:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:41.012 07:30:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:41.012 07:30:03 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.012 07:30:03 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.012 07:30:03 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:41.012 07:30:03 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:41.012 07:30:03 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:41.012 07:30:03 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:41.012 07:30:03 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.012 07:30:03 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.271 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:41.271 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:41.271 fio-3.35 00:20:41.271 Starting 2 threads 00:20:41.839 [2024-11-28 07:30:03.813106] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
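For the multi-subsystem case, the generated JSON above attaches two controllers, Nvme0 and Nvme1, to nqn.2016-06.io.spdk:cnode0 and cnode1 on the same 10.0.0.2:4420 listener; the subsystems themselves were created by the rpc_cmd calls traced earlier in this test. A condensed sketch of that target-side setup, assuming the calls are issued through the repo's scripts/rpc.py (a hypothetical stand-in for the test's rpc_cmd wrapper):

    # Flags are copied from the rpc_cmd invocations in the trace; the rpc.py
    # path is assumed from the repo checkout used by this job.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1; do
        "$rpc" bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
               --serial-number 53313233-$i --allow-any-host
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
               -t tcp -a 10.0.0.2 -s 4420
    done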
00:20:41.839 [2024-11-28 07:30:03.813154] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:51.819 00:20:51.819 filename0: (groupid=0, jobs=1): err= 0: pid=87262: Thu Nov 28 07:30:13 2024 00:20:51.819 read: IOPS=5192, BW=20.3MiB/s (21.3MB/s)(203MiB/10001msec) 00:20:51.819 slat (nsec): min=5772, max=88204, avg=21060.46, stdev=7109.92 00:20:51.819 clat (usec): min=541, max=2993, avg=715.58, stdev=59.72 00:20:51.819 lat (usec): min=552, max=3018, avg=736.64, stdev=60.69 00:20:51.819 clat percentiles (usec): 00:20:51.819 | 1.00th=[ 611], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 668], 00:20:51.819 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 709], 60.00th=[ 725], 00:20:51.819 | 70.00th=[ 742], 80.00th=[ 758], 90.00th=[ 783], 95.00th=[ 807], 00:20:51.819 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 1004], 99.95th=[ 1123], 00:20:51.819 | 99.99th=[ 2442] 00:20:51.819 bw ( KiB/s): min=20192, max=21248, per=50.01%, avg=20776.42, stdev=294.03, samples=19 00:20:51.819 iops : min= 5048, max= 5312, avg=5194.11, stdev=73.51, samples=19 00:20:51.819 lat (usec) : 750=76.68%, 1000=23.22% 00:20:51.819 lat (msec) : 2=0.09%, 4=0.02% 00:20:51.819 cpu : usr=92.72%, sys=6.08%, ctx=11, majf=0, minf=0 00:20:51.819 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.819 issued rwts: total=51932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.819 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:51.819 filename1: (groupid=0, jobs=1): err= 0: pid=87263: Thu Nov 28 07:30:13 2024 00:20:51.819 read: IOPS=5192, BW=20.3MiB/s (21.3MB/s)(203MiB/10001msec) 00:20:51.819 slat (usec): min=5, max=135, avg=21.20, stdev= 7.31 00:20:51.820 clat (usec): min=454, max=2989, avg=713.17, stdev=56.97 00:20:51.820 lat (usec): min=462, max=3023, avg=734.36, stdev=58.35 00:20:51.820 clat percentiles (usec): 00:20:51.820 | 1.00th=[ 619], 5.00th=[ 644], 10.00th=[ 652], 20.00th=[ 668], 00:20:51.820 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 717], 00:20:51.820 | 70.00th=[ 734], 80.00th=[ 750], 90.00th=[ 775], 95.00th=[ 799], 00:20:51.820 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 996], 99.95th=[ 1090], 00:20:51.820 | 99.99th=[ 2409] 00:20:51.820 bw ( KiB/s): min=20192, max=21248, per=50.01%, avg=20776.42, stdev=294.03, samples=19 00:20:51.820 iops : min= 5048, max= 5312, avg=5194.11, stdev=73.51, samples=19 00:20:51.820 lat (usec) : 500=0.01%, 750=79.22%, 1000=20.69% 00:20:51.820 lat (msec) : 2=0.07%, 4=0.02% 00:20:51.820 cpu : usr=92.93%, sys=5.80%, ctx=35, majf=0, minf=0 00:20:51.820 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.820 issued rwts: total=51932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.820 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:51.820 00:20:51.820 Run status group 0 (all jobs): 00:20:51.820 READ: bw=40.6MiB/s (42.5MB/s), 20.3MiB/s-20.3MiB/s (21.3MB/s-21.3MB/s), io=406MiB (425MB), run=10001-10001msec 00:20:52.080 07:30:14 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:52.080 07:30:14 -- target/dif.sh@43 -- # local sub 00:20:52.080 07:30:14 -- target/dif.sh@45 -- # for sub in "$@" 00:20:52.080 07:30:14 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:20:52.080 07:30:14 -- target/dif.sh@36 -- # local sub_id=0 00:20:52.080 07:30:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:52.080 07:30:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 07:30:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.080 07:30:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:52.080 07:30:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 07:30:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.080 07:30:14 -- target/dif.sh@45 -- # for sub in "$@" 00:20:52.080 07:30:14 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:52.080 07:30:14 -- target/dif.sh@36 -- # local sub_id=1 00:20:52.080 07:30:14 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.080 07:30:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 07:30:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.080 07:30:14 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:52.080 07:30:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 ************************************ 00:20:52.080 END TEST fio_dif_1_multi_subsystems 00:20:52.080 ************************************ 00:20:52.080 07:30:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.080 00:20:52.080 real 0m11.082s 00:20:52.080 user 0m19.250s 00:20:52.080 sys 0m1.486s 00:20:52.080 07:30:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 07:30:14 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:52.080 07:30:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:52.080 07:30:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 ************************************ 00:20:52.080 START TEST fio_dif_rand_params 00:20:52.080 ************************************ 00:20:52.080 07:30:14 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:20:52.080 07:30:14 -- target/dif.sh@100 -- # local NULL_DIF 00:20:52.080 07:30:14 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:52.080 07:30:14 -- target/dif.sh@103 -- # NULL_DIF=3 00:20:52.080 07:30:14 -- target/dif.sh@103 -- # bs=128k 00:20:52.080 07:30:14 -- target/dif.sh@103 -- # numjobs=3 00:20:52.080 07:30:14 -- target/dif.sh@103 -- # iodepth=3 00:20:52.080 07:30:14 -- target/dif.sh@103 -- # runtime=5 00:20:52.080 07:30:14 -- target/dif.sh@105 -- # create_subsystems 0 00:20:52.080 07:30:14 -- target/dif.sh@28 -- # local sub 00:20:52.080 07:30:14 -- target/dif.sh@30 -- # for sub in "$@" 00:20:52.080 07:30:14 -- target/dif.sh@31 -- # create_subsystem 0 00:20:52.080 07:30:14 -- target/dif.sh@18 -- # local sub_id=0 00:20:52.080 07:30:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:52.080 07:30:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 bdev_null0 00:20:52.080 07:30:14 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:52.080 07:30:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:52.080 07:30:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 07:30:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.080 07:30:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:52.080 07:30:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 07:30:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.080 07:30:14 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:52.080 07:30:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.080 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:20:52.080 [2024-11-28 07:30:14.271296] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.080 07:30:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.080 07:30:14 -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:52.080 07:30:14 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:52.080 07:30:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:52.080 07:30:14 -- nvmf/common.sh@520 -- # config=() 00:20:52.080 07:30:14 -- nvmf/common.sh@520 -- # local subsystem config 00:20:52.080 07:30:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:52.080 07:30:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.080 07:30:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:52.080 { 00:20:52.080 "params": { 00:20:52.080 "name": "Nvme$subsystem", 00:20:52.080 "trtype": "$TEST_TRANSPORT", 00:20:52.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:52.080 "adrfam": "ipv4", 00:20:52.080 "trsvcid": "$NVMF_PORT", 00:20:52.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:52.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:52.080 "hdgst": ${hdgst:-false}, 00:20:52.080 "ddgst": ${ddgst:-false} 00:20:52.080 }, 00:20:52.080 "method": "bdev_nvme_attach_controller" 00:20:52.080 } 00:20:52.080 EOF 00:20:52.080 )") 00:20:52.080 07:30:14 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.080 07:30:14 -- target/dif.sh@82 -- # gen_fio_conf 00:20:52.080 07:30:14 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:52.080 07:30:14 -- target/dif.sh@54 -- # local file 00:20:52.080 07:30:14 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.080 07:30:14 -- target/dif.sh@56 -- # cat 00:20:52.080 07:30:14 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:52.080 07:30:14 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.080 07:30:14 -- common/autotest_common.sh@1330 -- # shift 00:20:52.080 07:30:14 -- nvmf/common.sh@542 -- # cat 00:20:52.080 07:30:14 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:52.080 07:30:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.080 07:30:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:52.080 07:30:14 -- target/dif.sh@72 -- # (( file <= files )) 00:20:52.080 07:30:14 -- 
common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.080 07:30:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:52.080 07:30:14 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:52.080 07:30:14 -- nvmf/common.sh@544 -- # jq . 00:20:52.080 07:30:14 -- nvmf/common.sh@545 -- # IFS=, 00:20:52.080 07:30:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:52.080 "params": { 00:20:52.080 "name": "Nvme0", 00:20:52.080 "trtype": "tcp", 00:20:52.080 "traddr": "10.0.0.2", 00:20:52.080 "adrfam": "ipv4", 00:20:52.080 "trsvcid": "4420", 00:20:52.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:52.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:52.080 "hdgst": false, 00:20:52.080 "ddgst": false 00:20:52.080 }, 00:20:52.080 "method": "bdev_nvme_attach_controller" 00:20:52.080 }' 00:20:52.080 07:30:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:52.080 07:30:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:52.080 07:30:14 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.080 07:30:14 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.080 07:30:14 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:52.080 07:30:14 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:52.080 07:30:14 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:52.080 07:30:14 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:52.080 07:30:14 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:52.080 07:30:14 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:52.339 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:52.339 ... 00:20:52.339 fio-3.35 00:20:52.339 Starting 3 threads 00:20:52.598 [2024-11-28 07:30:14.856641] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
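The fio_dif_rand_params run starting above differs from the earlier tests mainly in its DIF parameters: the transport was created with --dif-insert-or-strip, the null bdev uses DIF type 3, and fio issues 128 KiB random reads with 3 jobs at queue depth 3 for 5 seconds per run. A sketch of the two DIF-specific RPCs, with flags copied verbatim from the trace and the same assumed rpc.py wrapper as in the earlier sketch:

    # TCP transport with DIF insert/strip enabled, plus a type-3 DIF null bdev
    # (size 64, 512-byte blocks, 16-byte metadata, as passed to bdev_null_create
    # in the trace). The rpc.py path is an assumption, not taken from the log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o --dif-insert-or-strip
    "$rpc" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3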
00:20:52.598 [2024-11-28 07:30:14.856728] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:57.875 00:20:57.875 filename0: (groupid=0, jobs=1): err= 0: pid=87420: Thu Nov 28 07:30:19 2024 00:20:57.875 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(193MiB/5001msec) 00:20:57.875 slat (nsec): min=5839, max=70016, avg=19861.54, stdev=11125.38 00:20:57.875 clat (usec): min=8450, max=18324, avg=9685.05, stdev=907.67 00:20:57.875 lat (usec): min=8459, max=18340, avg=9704.91, stdev=908.04 00:20:57.875 clat percentiles (usec): 00:20:57.875 | 1.00th=[ 9110], 5.00th=[ 9241], 10.00th=[ 9241], 20.00th=[ 9241], 00:20:57.875 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:20:57.875 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10028], 95.00th=[10552], 00:20:57.875 | 99.00th=[14615], 99.50th=[14746], 99.90th=[18220], 99.95th=[18220], 00:20:57.875 | 99.99th=[18220] 00:20:57.875 bw ( KiB/s): min=36096, max=41472, per=33.37%, avg=39490.89, stdev=2016.62, samples=9 00:20:57.875 iops : min= 282, max= 324, avg=308.44, stdev=15.68, samples=9 00:20:57.875 lat (msec) : 10=87.61%, 20=12.39% 00:20:57.875 cpu : usr=95.46%, sys=4.04%, ctx=6, majf=0, minf=0 00:20:57.875 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.875 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:57.875 filename0: (groupid=0, jobs=1): err= 0: pid=87421: Thu Nov 28 07:30:19 2024 00:20:57.875 read: IOPS=308, BW=38.6MiB/s (40.4MB/s)(193MiB/5007msec) 00:20:57.875 slat (nsec): min=6054, max=70485, avg=24456.43, stdev=12297.63 00:20:57.875 clat (usec): min=6878, max=18270, avg=9667.14, stdev=919.18 00:20:57.875 lat (usec): min=6901, max=18298, avg=9691.59, stdev=919.51 00:20:57.875 clat percentiles (usec): 00:20:57.875 | 1.00th=[ 9110], 5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9241], 00:20:57.875 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:20:57.875 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10028], 95.00th=[10552], 00:20:57.875 | 99.00th=[14615], 99.50th=[14746], 99.90th=[18220], 99.95th=[18220], 00:20:57.875 | 99.99th=[18220] 00:20:57.875 bw ( KiB/s): min=36096, max=41472, per=33.35%, avg=39465.90, stdev=1893.08, samples=10 00:20:57.875 iops : min= 282, max= 324, avg=308.20, stdev=14.80, samples=10 00:20:57.875 lat (msec) : 10=87.77%, 20=12.23% 00:20:57.875 cpu : usr=95.03%, sys=4.49%, ctx=6, majf=0, minf=0 00:20:57.875 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.875 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:57.875 filename0: (groupid=0, jobs=1): err= 0: pid=87422: Thu Nov 28 07:30:19 2024 00:20:57.875 read: IOPS=307, BW=38.5MiB/s (40.4MB/s)(193MiB/5007msec) 00:20:57.875 slat (nsec): min=5886, max=69847, avg=23661.45, stdev=11909.46 00:20:57.875 clat (usec): min=6880, max=18281, avg=9685.98, stdev=953.37 00:20:57.875 lat (usec): min=6904, max=18312, avg=9709.64, stdev=953.38 00:20:57.875 clat percentiles (usec): 00:20:57.875 | 1.00th=[ 9110], 5.00th=[ 9110], 
10.00th=[ 9241], 20.00th=[ 9241], 00:20:57.875 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:20:57.875 | 70.00th=[ 9765], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10552], 00:20:57.875 | 99.00th=[14615], 99.50th=[14877], 99.90th=[18220], 99.95th=[18220], 00:20:57.875 | 99.99th=[18220] 00:20:57.875 bw ( KiB/s): min=36096, max=41472, per=33.29%, avg=39389.00, stdev=2018.68, samples=10 00:20:57.875 iops : min= 282, max= 324, avg=307.60, stdev=15.80, samples=10 00:20:57.875 lat (msec) : 10=87.74%, 20=12.26% 00:20:57.875 cpu : usr=95.03%, sys=4.45%, ctx=10, majf=0, minf=0 00:20:57.875 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:57.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:57.875 issued rwts: total=1542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:57.875 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:57.875 00:20:57.875 Run status group 0 (all jobs): 00:20:57.875 READ: bw=116MiB/s (121MB/s), 38.5MiB/s-38.6MiB/s (40.4MB/s-40.4MB/s), io=579MiB (607MB), run=5001-5007msec 00:20:58.135 07:30:20 -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:58.135 07:30:20 -- target/dif.sh@43 -- # local sub 00:20:58.135 07:30:20 -- target/dif.sh@45 -- # for sub in "$@" 00:20:58.135 07:30:20 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:58.135 07:30:20 -- target/dif.sh@36 -- # local sub_id=0 00:20:58.135 07:30:20 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@109 -- # NULL_DIF=2 00:20:58.135 07:30:20 -- target/dif.sh@109 -- # bs=4k 00:20:58.135 07:30:20 -- target/dif.sh@109 -- # numjobs=8 00:20:58.135 07:30:20 -- target/dif.sh@109 -- # iodepth=16 00:20:58.135 07:30:20 -- target/dif.sh@109 -- # runtime= 00:20:58.135 07:30:20 -- target/dif.sh@109 -- # files=2 00:20:58.135 07:30:20 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:58.135 07:30:20 -- target/dif.sh@28 -- # local sub 00:20:58.135 07:30:20 -- target/dif.sh@30 -- # for sub in "$@" 00:20:58.135 07:30:20 -- target/dif.sh@31 -- # create_subsystem 0 00:20:58.135 07:30:20 -- target/dif.sh@18 -- # local sub_id=0 00:20:58.135 07:30:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 bdev_null0 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 [2024-11-28 07:30:20.243956] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@30 -- # for sub in "$@" 00:20:58.135 07:30:20 -- target/dif.sh@31 -- # create_subsystem 1 00:20:58.135 07:30:20 -- target/dif.sh@18 -- # local sub_id=1 00:20:58.135 07:30:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 bdev_null1 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@30 -- # for sub in "$@" 00:20:58.135 07:30:20 -- target/dif.sh@31 -- # create_subsystem 2 00:20:58.135 07:30:20 -- target/dif.sh@18 -- # local sub_id=2 00:20:58.135 07:30:20 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 bdev_null2 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:58.135 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.135 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.135 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.135 07:30:20 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:58.136 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.136 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.136 07:30:20 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.136 07:30:20 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:58.136 07:30:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.136 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:58.136 07:30:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.136 07:30:20 -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:58.136 07:30:20 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:58.136 07:30:20 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:58.136 07:30:20 -- nvmf/common.sh@520 -- # config=() 00:20:58.136 07:30:20 -- nvmf/common.sh@520 -- # local subsystem config 00:20:58.136 07:30:20 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.136 07:30:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:58.136 07:30:20 -- target/dif.sh@82 -- # gen_fio_conf 00:20:58.136 07:30:20 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.136 07:30:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:58.136 { 00:20:58.136 "params": { 00:20:58.136 "name": "Nvme$subsystem", 00:20:58.136 "trtype": "$TEST_TRANSPORT", 00:20:58.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.136 "adrfam": "ipv4", 00:20:58.136 "trsvcid": "$NVMF_PORT", 00:20:58.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.136 "hdgst": ${hdgst:-false}, 00:20:58.136 "ddgst": ${ddgst:-false} 00:20:58.136 }, 00:20:58.136 "method": "bdev_nvme_attach_controller" 00:20:58.136 } 00:20:58.136 EOF 00:20:58.136 )") 00:20:58.136 07:30:20 -- target/dif.sh@54 -- # local file 00:20:58.136 07:30:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:58.136 07:30:20 -- target/dif.sh@56 -- # cat 00:20:58.136 07:30:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:58.136 07:30:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:58.136 07:30:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.136 07:30:20 -- common/autotest_common.sh@1330 -- # shift 00:20:58.136 07:30:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:58.136 07:30:20 -- nvmf/common.sh@542 -- # cat 00:20:58.136 07:30:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.136 07:30:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.136 07:30:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:58.136 07:30:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:58.136 07:30:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:58.136 07:30:20 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:58.136 07:30:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:58.136 { 00:20:58.136 "params": { 00:20:58.136 "name": "Nvme$subsystem", 00:20:58.136 "trtype": "$TEST_TRANSPORT", 00:20:58.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.136 "adrfam": "ipv4", 00:20:58.136 "trsvcid": "$NVMF_PORT", 00:20:58.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.136 "hdgst": ${hdgst:-false}, 00:20:58.136 "ddgst": ${ddgst:-false} 00:20:58.136 }, 00:20:58.136 "method": 
"bdev_nvme_attach_controller" 00:20:58.136 } 00:20:58.136 EOF 00:20:58.136 )") 00:20:58.136 07:30:20 -- target/dif.sh@72 -- # (( file <= files )) 00:20:58.136 07:30:20 -- target/dif.sh@73 -- # cat 00:20:58.136 07:30:20 -- nvmf/common.sh@542 -- # cat 00:20:58.136 07:30:20 -- target/dif.sh@72 -- # (( file++ )) 00:20:58.136 07:30:20 -- target/dif.sh@72 -- # (( file <= files )) 00:20:58.136 07:30:20 -- target/dif.sh@73 -- # cat 00:20:58.136 07:30:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:58.136 07:30:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:58.136 { 00:20:58.136 "params": { 00:20:58.136 "name": "Nvme$subsystem", 00:20:58.136 "trtype": "$TEST_TRANSPORT", 00:20:58.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.136 "adrfam": "ipv4", 00:20:58.136 "trsvcid": "$NVMF_PORT", 00:20:58.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.136 "hdgst": ${hdgst:-false}, 00:20:58.136 "ddgst": ${ddgst:-false} 00:20:58.136 }, 00:20:58.136 "method": "bdev_nvme_attach_controller" 00:20:58.136 } 00:20:58.136 EOF 00:20:58.136 )") 00:20:58.136 07:30:20 -- nvmf/common.sh@542 -- # cat 00:20:58.136 07:30:20 -- target/dif.sh@72 -- # (( file++ )) 00:20:58.136 07:30:20 -- target/dif.sh@72 -- # (( file <= files )) 00:20:58.136 07:30:20 -- nvmf/common.sh@544 -- # jq . 00:20:58.136 07:30:20 -- nvmf/common.sh@545 -- # IFS=, 00:20:58.136 07:30:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:58.136 "params": { 00:20:58.136 "name": "Nvme0", 00:20:58.136 "trtype": "tcp", 00:20:58.136 "traddr": "10.0.0.2", 00:20:58.136 "adrfam": "ipv4", 00:20:58.136 "trsvcid": "4420", 00:20:58.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:58.136 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:58.136 "hdgst": false, 00:20:58.136 "ddgst": false 00:20:58.136 }, 00:20:58.136 "method": "bdev_nvme_attach_controller" 00:20:58.136 },{ 00:20:58.136 "params": { 00:20:58.136 "name": "Nvme1", 00:20:58.136 "trtype": "tcp", 00:20:58.136 "traddr": "10.0.0.2", 00:20:58.136 "adrfam": "ipv4", 00:20:58.136 "trsvcid": "4420", 00:20:58.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.136 "hdgst": false, 00:20:58.136 "ddgst": false 00:20:58.136 }, 00:20:58.136 "method": "bdev_nvme_attach_controller" 00:20:58.136 },{ 00:20:58.136 "params": { 00:20:58.136 "name": "Nvme2", 00:20:58.136 "trtype": "tcp", 00:20:58.136 "traddr": "10.0.0.2", 00:20:58.136 "adrfam": "ipv4", 00:20:58.136 "trsvcid": "4420", 00:20:58.136 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.136 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:58.136 "hdgst": false, 00:20:58.136 "ddgst": false 00:20:58.136 }, 00:20:58.136 "method": "bdev_nvme_attach_controller" 00:20:58.136 }' 00:20:58.136 07:30:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:58.136 07:30:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:58.136 07:30:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.136 07:30:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:58.136 07:30:20 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:58.136 07:30:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:58.136 07:30:20 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:58.136 07:30:20 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:58.136 07:30:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:58.136 07:30:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:58.395 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:58.395 ... 00:20:58.395 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:58.395 ... 00:20:58.395 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:58.395 ... 00:20:58.395 fio-3.35 00:20:58.395 Starting 24 threads 00:20:58.964 [2024-11-28 07:30:21.039107] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:58.964 [2024-11-28 07:30:21.039164] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:11.177 00:21:11.177 filename0: (groupid=0, jobs=1): err= 0: pid=87520: Thu Nov 28 07:30:31 2024 00:21:11.177 read: IOPS=260, BW=1040KiB/s (1065kB/s)(10.2MiB/10030msec) 00:21:11.177 slat (usec): min=4, max=8034, avg=37.23, stdev=359.69 00:21:11.177 clat (msec): min=13, max=125, avg=61.29, stdev=16.43 00:21:11.177 lat (msec): min=13, max=125, avg=61.33, stdev=16.43 00:21:11.177 clat percentiles (msec): 00:21:11.177 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:21:11.177 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 63], 00:21:11.177 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 93], 00:21:11.177 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 121], 99.95th=[ 121], 00:21:11.178 | 99.99th=[ 126] 00:21:11.178 bw ( KiB/s): min= 784, max= 1232, per=4.03%, avg=1038.90, stdev=117.81, samples=20 00:21:11.178 iops : min= 196, max= 308, avg=259.70, stdev=29.42, samples=20 00:21:11.178 lat (msec) : 20=1.07%, 50=23.00%, 100=74.63%, 250=1.30% 00:21:11.178 cpu : usr=36.85%, sys=1.27%, ctx=1071, majf=0, minf=9 00:21:11.178 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=80.4%, 16=17.0%, 32=0.0%, >=64=0.0% 00:21:11.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 complete : 0=0.0%, 4=88.5%, 8=11.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 issued rwts: total=2609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.178 filename0: (groupid=0, jobs=1): err= 0: pid=87521: Thu Nov 28 07:30:31 2024 00:21:11.178 read: IOPS=275, BW=1100KiB/s (1127kB/s)(10.8MiB/10009msec) 00:21:11.178 slat (usec): min=4, max=8022, avg=25.16, stdev=170.64 00:21:11.178 clat (msec): min=7, max=118, avg=58.02, stdev=17.36 00:21:11.178 lat (msec): min=7, max=118, avg=58.04, stdev=17.35 00:21:11.178 clat percentiles (msec): 00:21:11.178 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 45], 00:21:11.178 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 61], 00:21:11.178 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 84], 95.00th=[ 93], 00:21:11.178 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 116], 99.95th=[ 116], 00:21:11.178 | 99.99th=[ 120] 00:21:11.178 bw ( KiB/s): min= 792, max= 1248, per=4.19%, avg=1081.79, stdev=129.80, samples=19 00:21:11.178 iops : min= 198, max= 312, avg=270.42, stdev=32.47, samples=19 00:21:11.178 lat (msec) : 10=0.80%, 50=36.14%, 100=61.17%, 250=1.89% 00:21:11.178 cpu : usr=37.30%, sys=1.15%, ctx=1009, majf=0, minf=9 00:21:11.178 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:11.178 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 issued rwts: total=2753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.178 filename0: (groupid=0, jobs=1): err= 0: pid=87522: Thu Nov 28 07:30:31 2024 00:21:11.178 read: IOPS=264, BW=1057KiB/s (1082kB/s)(10.3MiB/10024msec) 00:21:11.178 slat (usec): min=3, max=8045, avg=38.69, stdev=381.10 00:21:11.178 clat (msec): min=21, max=118, avg=60.40, stdev=15.99 00:21:11.178 lat (msec): min=21, max=118, avg=60.44, stdev=15.99 00:21:11.178 clat percentiles (msec): 00:21:11.178 | 1.00th=[ 27], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:21:11.178 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 61], 00:21:11.178 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 92], 00:21:11.178 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 110], 99.95th=[ 110], 00:21:11.178 | 99.99th=[ 120] 00:21:11.178 bw ( KiB/s): min= 760, max= 1296, per=4.08%, avg=1052.80, stdev=123.37, samples=20 00:21:11.178 iops : min= 190, max= 324, avg=263.20, stdev=30.84, samples=20 00:21:11.178 lat (msec) : 50=28.25%, 100=70.09%, 250=1.66% 00:21:11.178 cpu : usr=32.16%, sys=1.08%, ctx=851, majf=0, minf=9 00:21:11.178 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=81.2%, 16=17.0%, 32=0.0%, >=64=0.0% 00:21:11.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 complete : 0=0.0%, 4=88.3%, 8=11.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 issued rwts: total=2648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.178 filename0: (groupid=0, jobs=1): err= 0: pid=87523: Thu Nov 28 07:30:31 2024 00:21:11.178 read: IOPS=272, BW=1089KiB/s (1116kB/s)(10.6MiB/10001msec) 00:21:11.178 slat (usec): min=6, max=12025, avg=43.37, stdev=434.25 00:21:11.178 clat (msec): min=8, max=126, avg=58.57, stdev=17.97 00:21:11.178 lat (msec): min=8, max=126, avg=58.61, stdev=17.97 00:21:11.178 clat percentiles (msec): 00:21:11.178 | 1.00th=[ 27], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 43], 00:21:11.178 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 61], 00:21:11.178 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 94], 00:21:11.178 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 120], 99.95th=[ 127], 00:21:11.178 | 99.99th=[ 128] 00:21:11.178 bw ( KiB/s): min= 752, max= 1280, per=4.16%, avg=1072.42, stdev=149.61, samples=19 00:21:11.178 iops : min= 188, max= 320, avg=268.11, stdev=37.40, samples=19 00:21:11.178 lat (msec) : 10=0.26%, 20=0.22%, 50=35.32%, 100=61.82%, 250=2.39% 00:21:11.178 cpu : usr=34.43%, sys=1.09%, ctx=993, majf=0, minf=9 00:21:11.178 IO depths : 1=0.1%, 2=0.1%, 4=0.7%, 8=82.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:11.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 issued rwts: total=2724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.178 filename0: (groupid=0, jobs=1): err= 0: pid=87524: Thu Nov 28 07:30:31 2024 00:21:11.178 read: IOPS=257, BW=1032KiB/s (1057kB/s)(10.1MiB/10006msec) 00:21:11.178 slat (usec): min=4, max=8047, avg=39.10, stdev=369.43 00:21:11.178 clat (msec): min=24, max=132, avg=61.82, stdev=19.60 00:21:11.178 lat (msec): min=24, max=132, avg=61.86, stdev=19.59 00:21:11.178 
clat percentiles (msec): 00:21:11.178 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:21:11.178 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 62], 00:21:11.178 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 89], 95.00th=[ 107], 00:21:11.178 | 99.00th=[ 122], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 133], 00:21:11.178 | 99.99th=[ 133] 00:21:11.178 bw ( KiB/s): min= 544, max= 1248, per=3.95%, avg=1018.95, stdev=186.12, samples=19 00:21:11.178 iops : min= 136, max= 312, avg=254.74, stdev=46.53, samples=19 00:21:11.178 lat (msec) : 50=29.72%, 100=64.12%, 250=6.16% 00:21:11.178 cpu : usr=36.96%, sys=1.30%, ctx=1045, majf=0, minf=9 00:21:11.178 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:21:11.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 issued rwts: total=2581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.178 filename0: (groupid=0, jobs=1): err= 0: pid=87525: Thu Nov 28 07:30:31 2024 00:21:11.178 read: IOPS=266, BW=1067KiB/s (1092kB/s)(10.5MiB/10053msec) 00:21:11.178 slat (usec): min=3, max=4021, avg=20.64, stdev=128.06 00:21:11.178 clat (msec): min=15, max=120, avg=59.79, stdev=16.32 00:21:11.178 lat (msec): min=15, max=120, avg=59.82, stdev=16.32 00:21:11.178 clat percentiles (msec): 00:21:11.178 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:21:11.178 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:21:11.178 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 91], 00:21:11.178 | 99.00th=[ 103], 99.50th=[ 105], 99.90th=[ 120], 99.95th=[ 121], 00:21:11.178 | 99.99th=[ 121] 00:21:11.178 bw ( KiB/s): min= 760, max= 1264, per=4.13%, avg=1066.00, stdev=141.97, samples=20 00:21:11.178 iops : min= 190, max= 316, avg=266.50, stdev=35.49, samples=20 00:21:11.178 lat (msec) : 20=0.63%, 50=25.96%, 100=72.14%, 250=1.27% 00:21:11.178 cpu : usr=43.68%, sys=1.31%, ctx=1424, majf=0, minf=9 00:21:11.178 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.1%, 16=16.8%, 32=0.0%, >=64=0.0% 00:21:11.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 complete : 0=0.0%, 4=88.2%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 issued rwts: total=2681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.178 filename0: (groupid=0, jobs=1): err= 0: pid=87526: Thu Nov 28 07:30:31 2024 00:21:11.178 read: IOPS=267, BW=1069KiB/s (1095kB/s)(10.5MiB/10035msec) 00:21:11.178 slat (usec): min=4, max=8025, avg=24.93, stdev=181.21 00:21:11.178 clat (msec): min=7, max=119, avg=59.72, stdev=17.84 00:21:11.178 lat (msec): min=7, max=119, avg=59.75, stdev=17.84 00:21:11.178 clat percentiles (msec): 00:21:11.178 | 1.00th=[ 15], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 47], 00:21:11.178 | 30.00th=[ 51], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:21:11.178 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:21:11.178 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 118], 99.95th=[ 118], 00:21:11.178 | 99.99th=[ 121] 00:21:11.178 bw ( KiB/s): min= 736, max= 1592, per=4.13%, avg=1066.70, stdev=186.30, samples=20 00:21:11.178 iops : min= 184, max= 398, avg=266.65, stdev=46.62, samples=20 00:21:11.178 lat (msec) : 10=0.30%, 20=1.30%, 50=27.88%, 100=68.47%, 250=2.05% 00:21:11.178 cpu : usr=39.16%, sys=1.39%, ctx=1365, majf=0, minf=9 
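In the per-job result blocks above, the per= field of each bw line is that job's share of the group's aggregate read bandwidth, i.e. the job's average bandwidth divided by the 25.2 MiB/s total reported further down in the "Run status group 0 (all jobs)" line. A quick check with the pid=87520 numbers from this log (the awk one-liner is purely illustrative and is not part of the test script):

  awk 'BEGIN {
      total_kib = 25.2 * 1024     # aggregate READ bandwidth from the run status line, in KiB/s
      job_kib   = 1038.90         # avg bandwidth reported for pid=87520
      printf "per = %.2f%%\n", 100 * job_kib / total_kib   # prints "per = 4.03%", matching the log
  }'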
00:21:11.178 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=82.1%, 16=17.2%, 32=0.0%, >=64=0.0% 00:21:11.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 complete : 0=0.0%, 4=88.1%, 8=11.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.178 issued rwts: total=2683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.178 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.178 filename0: (groupid=0, jobs=1): err= 0: pid=87527: Thu Nov 28 07:30:31 2024 00:21:11.178 read: IOPS=275, BW=1100KiB/s (1127kB/s)(10.8MiB/10004msec) 00:21:11.178 slat (usec): min=6, max=8037, avg=32.36, stdev=274.75 00:21:11.178 clat (msec): min=6, max=142, avg=58.02, stdev=18.21 00:21:11.178 lat (msec): min=6, max=142, avg=58.06, stdev=18.21 00:21:11.178 clat percentiles (msec): 00:21:11.178 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 43], 00:21:11.178 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 61], 00:21:11.178 | 70.00th=[ 64], 80.00th=[ 71], 90.00th=[ 85], 95.00th=[ 93], 00:21:11.178 | 99.00th=[ 107], 99.50th=[ 126], 99.90th=[ 127], 99.95th=[ 142], 00:21:11.178 | 99.99th=[ 144] 00:21:11.178 bw ( KiB/s): min= 784, max= 1224, per=4.17%, avg=1074.11, stdev=139.33, samples=19 00:21:11.178 iops : min= 196, max= 306, avg=268.53, stdev=34.83, samples=19 00:21:11.178 lat (msec) : 10=1.05%, 20=0.33%, 50=33.68%, 100=62.79%, 250=2.14% 00:21:11.179 cpu : usr=40.71%, sys=1.36%, ctx=1358, majf=0, minf=9 00:21:11.179 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:11.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 issued rwts: total=2752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.179 filename1: (groupid=0, jobs=1): err= 0: pid=87528: Thu Nov 28 07:30:31 2024 00:21:11.179 read: IOPS=265, BW=1063KiB/s (1088kB/s)(10.4MiB/10016msec) 00:21:11.179 slat (usec): min=4, max=8032, avg=34.76, stdev=285.40 00:21:11.179 clat (msec): min=26, max=122, avg=60.07, stdev=16.42 00:21:11.179 lat (msec): min=26, max=122, avg=60.11, stdev=16.43 00:21:11.179 clat percentiles (msec): 00:21:11.179 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 46], 00:21:11.179 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 62], 00:21:11.179 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 93], 00:21:11.179 | 99.00th=[ 104], 99.50th=[ 110], 99.90th=[ 116], 99.95th=[ 123], 00:21:11.179 | 99.99th=[ 123] 00:21:11.179 bw ( KiB/s): min= 768, max= 1232, per=4.10%, avg=1058.00, stdev=126.27, samples=20 00:21:11.179 iops : min= 192, max= 308, avg=264.50, stdev=31.57, samples=20 00:21:11.179 lat (msec) : 50=28.49%, 100=69.64%, 250=1.88% 00:21:11.179 cpu : usr=40.39%, sys=1.44%, ctx=1433, majf=0, minf=9 00:21:11.179 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.5%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:11.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 complete : 0=0.0%, 4=88.4%, 8=10.9%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 issued rwts: total=2661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.179 filename1: (groupid=0, jobs=1): err= 0: pid=87529: Thu Nov 28 07:30:31 2024 00:21:11.179 read: IOPS=263, BW=1052KiB/s (1077kB/s)(10.3MiB/10021msec) 00:21:11.179 slat (usec): min=4, max=8030, avg=37.27, stdev=324.36 00:21:11.179 clat (msec): min=26, max=113, 
avg=60.67, stdev=16.23 00:21:11.179 lat (msec): min=26, max=113, avg=60.71, stdev=16.23 00:21:11.179 clat percentiles (msec): 00:21:11.179 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:21:11.179 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 62], 00:21:11.179 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 92], 00:21:11.179 | 99.00th=[ 104], 99.50th=[ 108], 99.90th=[ 114], 99.95th=[ 114], 00:21:11.179 | 99.99th=[ 114] 00:21:11.179 bw ( KiB/s): min= 736, max= 1168, per=4.06%, avg=1047.60, stdev=119.36, samples=20 00:21:11.179 iops : min= 184, max= 292, avg=261.85, stdev=29.81, samples=20 00:21:11.179 lat (msec) : 50=28.79%, 100=69.54%, 250=1.67% 00:21:11.179 cpu : usr=38.20%, sys=1.26%, ctx=1197, majf=0, minf=9 00:21:11.179 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=80.7%, 16=16.8%, 32=0.0%, >=64=0.0% 00:21:11.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 issued rwts: total=2636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.179 filename1: (groupid=0, jobs=1): err= 0: pid=87530: Thu Nov 28 07:30:31 2024 00:21:11.179 read: IOPS=267, BW=1070KiB/s (1096kB/s)(10.5MiB/10007msec) 00:21:11.179 slat (usec): min=4, max=7012, avg=34.68, stdev=266.93 00:21:11.179 clat (msec): min=8, max=133, avg=59.66, stdev=19.34 00:21:11.179 lat (msec): min=8, max=133, avg=59.70, stdev=19.34 00:21:11.179 clat percentiles (msec): 00:21:11.179 | 1.00th=[ 22], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 44], 00:21:11.179 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:21:11.179 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 88], 95.00th=[ 96], 00:21:11.179 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 133], 99.95th=[ 134], 00:21:11.179 | 99.99th=[ 134] 00:21:11.179 bw ( KiB/s): min= 624, max= 1280, per=4.07%, avg=1049.37, stdev=187.77, samples=19 00:21:11.179 iops : min= 156, max= 320, avg=262.32, stdev=46.94, samples=19 00:21:11.179 lat (msec) : 10=0.86%, 50=32.57%, 100=62.23%, 250=4.33% 00:21:11.179 cpu : usr=43.03%, sys=1.66%, ctx=1217, majf=0, minf=9 00:21:11.179 IO depths : 1=0.1%, 2=1.5%, 4=6.2%, 8=76.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:11.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 complete : 0=0.0%, 4=88.9%, 8=9.7%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 issued rwts: total=2677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.179 filename1: (groupid=0, jobs=1): err= 0: pid=87531: Thu Nov 28 07:30:31 2024 00:21:11.179 read: IOPS=271, BW=1085KiB/s (1111kB/s)(10.6MiB/10039msec) 00:21:11.179 slat (usec): min=4, max=5039, avg=36.25, stdev=270.33 00:21:11.179 clat (usec): min=1575, max=120013, avg=58730.91, stdev=18586.80 00:21:11.179 lat (usec): min=1585, max=120021, avg=58767.16, stdev=18593.51 00:21:11.179 clat percentiles (msec): 00:21:11.179 | 1.00th=[ 4], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 45], 00:21:11.179 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 63], 00:21:11.179 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 91], 00:21:11.179 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 107], 99.95th=[ 113], 00:21:11.179 | 99.99th=[ 121] 00:21:11.179 bw ( KiB/s): min= 784, max= 1776, per=4.21%, avg=1085.60, stdev=201.10, samples=20 00:21:11.179 iops : min= 196, max= 444, avg=271.40, stdev=50.28, samples=20 00:21:11.179 lat (msec) : 2=0.59%, 
4=1.18%, 10=0.59%, 20=1.18%, 50=24.05% 00:21:11.179 lat (msec) : 100=70.95%, 250=1.47% 00:21:11.179 cpu : usr=44.43%, sys=1.29%, ctx=1521, majf=0, minf=9 00:21:11.179 IO depths : 1=0.1%, 2=1.2%, 4=4.5%, 8=78.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:21:11.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 complete : 0=0.0%, 4=88.8%, 8=10.2%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 issued rwts: total=2723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.179 filename1: (groupid=0, jobs=1): err= 0: pid=87532: Thu Nov 28 07:30:31 2024 00:21:11.179 read: IOPS=273, BW=1093KiB/s (1120kB/s)(10.7MiB/10025msec) 00:21:11.179 slat (usec): min=4, max=3043, avg=23.65, stdev=93.15 00:21:11.179 clat (msec): min=16, max=116, avg=58.42, stdev=16.50 00:21:11.179 lat (msec): min=16, max=116, avg=58.44, stdev=16.50 00:21:11.179 clat percentiles (msec): 00:21:11.179 | 1.00th=[ 25], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 44], 00:21:11.179 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 58], 60.00th=[ 61], 00:21:11.179 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 89], 00:21:11.179 | 99.00th=[ 105], 99.50th=[ 106], 99.90th=[ 110], 99.95th=[ 114], 00:21:11.179 | 99.99th=[ 117] 00:21:11.179 bw ( KiB/s): min= 736, max= 1392, per=4.22%, avg=1089.60, stdev=150.87, samples=20 00:21:11.179 iops : min= 184, max= 348, avg=272.40, stdev=37.72, samples=20 00:21:11.179 lat (msec) : 20=0.22%, 50=31.61%, 100=66.72%, 250=1.46% 00:21:11.179 cpu : usr=48.52%, sys=1.67%, ctx=1634, majf=0, minf=9 00:21:11.179 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=82.7%, 16=16.8%, 32=0.0%, >=64=0.0% 00:21:11.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 issued rwts: total=2740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.179 filename1: (groupid=0, jobs=1): err= 0: pid=87533: Thu Nov 28 07:30:31 2024 00:21:11.179 read: IOPS=262, BW=1050KiB/s (1075kB/s)(10.3MiB/10039msec) 00:21:11.179 slat (usec): min=4, max=9019, avg=41.11, stdev=447.59 00:21:11.179 clat (msec): min=10, max=121, avg=60.73, stdev=16.72 00:21:11.179 lat (msec): min=10, max=121, avg=60.77, stdev=16.73 00:21:11.179 clat percentiles (msec): 00:21:11.179 | 1.00th=[ 14], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 48], 00:21:11.179 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 61], 00:21:11.179 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 92], 00:21:11.179 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 109], 99.95th=[ 121], 00:21:11.179 | 99.99th=[ 123] 00:21:11.179 bw ( KiB/s): min= 736, max= 1410, per=4.07%, avg=1049.70, stdev=154.84, samples=20 00:21:11.179 iops : min= 184, max= 352, avg=262.40, stdev=38.65, samples=20 00:21:11.179 lat (msec) : 20=1.82%, 50=26.01%, 100=70.96%, 250=1.21% 00:21:11.179 cpu : usr=33.11%, sys=1.17%, ctx=931, majf=0, minf=9 00:21:11.179 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=80.3%, 16=17.2%, 32=0.0%, >=64=0.0% 00:21:11.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 complete : 0=0.0%, 4=88.6%, 8=11.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 issued rwts: total=2634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.179 filename1: (groupid=0, jobs=1): err= 0: pid=87534: Thu Nov 28 07:30:31 2024 00:21:11.179 read: 
IOPS=271, BW=1085KiB/s (1111kB/s)(10.6MiB/10011msec) 00:21:11.179 slat (usec): min=6, max=8045, avg=35.32, stdev=307.75 00:21:11.179 clat (msec): min=11, max=118, avg=58.85, stdev=16.52 00:21:11.179 lat (msec): min=11, max=118, avg=58.89, stdev=16.52 00:21:11.179 clat percentiles (msec): 00:21:11.179 | 1.00th=[ 30], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 46], 00:21:11.179 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:21:11.179 | 70.00th=[ 64], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 93], 00:21:11.179 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 110], 99.95th=[ 110], 00:21:11.179 | 99.99th=[ 120] 00:21:11.179 bw ( KiB/s): min= 784, max= 1272, per=4.15%, avg=1069.05, stdev=130.97, samples=19 00:21:11.179 iops : min= 196, max= 318, avg=267.26, stdev=32.74, samples=19 00:21:11.179 lat (msec) : 20=0.26%, 50=33.19%, 100=65.71%, 250=0.85% 00:21:11.179 cpu : usr=34.39%, sys=1.19%, ctx=1018, majf=0, minf=9 00:21:11.179 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.2%, 16=16.6%, 32=0.0%, >=64=0.0% 00:21:11.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.179 issued rwts: total=2715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.179 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.179 filename1: (groupid=0, jobs=1): err= 0: pid=87535: Thu Nov 28 07:30:31 2024 00:21:11.179 read: IOPS=258, BW=1034KiB/s (1059kB/s)(10.1MiB/10025msec) 00:21:11.179 slat (usec): min=4, max=9022, avg=27.43, stdev=257.41 00:21:11.180 clat (msec): min=26, max=123, avg=61.77, stdev=16.80 00:21:11.180 lat (msec): min=26, max=123, avg=61.79, stdev=16.80 00:21:11.180 clat percentiles (msec): 00:21:11.180 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 00:21:11.180 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 64], 00:21:11.180 | 70.00th=[ 67], 80.00th=[ 74], 90.00th=[ 85], 95.00th=[ 95], 00:21:11.180 | 99.00th=[ 108], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:21:11.180 | 99.99th=[ 125] 00:21:11.180 bw ( KiB/s): min= 736, max= 1208, per=4.00%, avg=1030.30, stdev=150.46, samples=20 00:21:11.180 iops : min= 184, max= 302, avg=257.55, stdev=37.62, samples=20 00:21:11.180 lat (msec) : 50=24.65%, 100=73.69%, 250=1.66% 00:21:11.180 cpu : usr=42.12%, sys=1.17%, ctx=1359, majf=0, minf=9 00:21:11.180 IO depths : 1=0.1%, 2=1.1%, 4=4.5%, 8=78.1%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:11.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 complete : 0=0.0%, 4=89.0%, 8=10.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.180 filename2: (groupid=0, jobs=1): err= 0: pid=87536: Thu Nov 28 07:30:31 2024 00:21:11.180 read: IOPS=282, BW=1130KiB/s (1158kB/s)(11.0MiB/10003msec) 00:21:11.180 slat (usec): min=3, max=8044, avg=25.65, stdev=213.46 00:21:11.180 clat (msec): min=6, max=125, avg=56.51, stdev=17.81 00:21:11.180 lat (msec): min=6, max=125, avg=56.53, stdev=17.81 00:21:11.180 clat percentiles (msec): 00:21:11.180 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 40], 00:21:11.180 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 00:21:11.180 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 84], 95.00th=[ 92], 00:21:11.180 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 110], 99.95th=[ 126], 00:21:11.180 | 99.99th=[ 126] 00:21:11.180 bw ( KiB/s): min= 816, max= 1312, per=4.28%, 
avg=1104.11, stdev=135.06, samples=19 00:21:11.180 iops : min= 204, max= 328, avg=276.00, stdev=33.78, samples=19 00:21:11.180 lat (msec) : 10=1.06%, 20=0.28%, 50=39.48%, 100=57.55%, 250=1.63% 00:21:11.180 cpu : usr=32.23%, sys=1.05%, ctx=857, majf=0, minf=9 00:21:11.180 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.1%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:11.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 issued rwts: total=2827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.180 filename2: (groupid=0, jobs=1): err= 0: pid=87537: Thu Nov 28 07:30:31 2024 00:21:11.180 read: IOPS=271, BW=1086KiB/s (1112kB/s)(10.6MiB/10005msec) 00:21:11.180 slat (usec): min=4, max=8055, avg=38.02, stdev=290.37 00:21:11.180 clat (msec): min=24, max=121, avg=58.77, stdev=16.26 00:21:11.180 lat (msec): min=24, max=121, avg=58.81, stdev=16.25 00:21:11.180 clat percentiles (msec): 00:21:11.180 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 39], 20.00th=[ 45], 00:21:11.180 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 61], 00:21:11.180 | 70.00th=[ 64], 80.00th=[ 69], 90.00th=[ 84], 95.00th=[ 91], 00:21:11.180 | 99.00th=[ 107], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 122], 00:21:11.180 | 99.99th=[ 123] 00:21:11.180 bw ( KiB/s): min= 816, max= 1232, per=4.17%, avg=1075.79, stdev=122.56, samples=19 00:21:11.180 iops : min= 204, max= 308, avg=268.95, stdev=30.64, samples=19 00:21:11.180 lat (msec) : 50=33.42%, 100=65.03%, 250=1.55% 00:21:11.180 cpu : usr=41.10%, sys=1.21%, ctx=1311, majf=0, minf=9 00:21:11.180 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:21:11.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 issued rwts: total=2717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.180 filename2: (groupid=0, jobs=1): err= 0: pid=87538: Thu Nov 28 07:30:31 2024 00:21:11.180 read: IOPS=262, BW=1050KiB/s (1075kB/s)(10.3MiB/10018msec) 00:21:11.180 slat (usec): min=4, max=11015, avg=33.88, stdev=329.82 00:21:11.180 clat (msec): min=27, max=117, avg=60.79, stdev=15.71 00:21:11.180 lat (msec): min=27, max=117, avg=60.82, stdev=15.71 00:21:11.180 clat percentiles (msec): 00:21:11.180 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 48], 00:21:11.180 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 61], 00:21:11.180 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 92], 00:21:11.180 | 99.00th=[ 106], 99.50th=[ 107], 99.90th=[ 112], 99.95th=[ 114], 00:21:11.180 | 99.99th=[ 118] 00:21:11.180 bw ( KiB/s): min= 768, max= 1200, per=4.05%, avg=1045.60, stdev=130.14, samples=20 00:21:11.180 iops : min= 192, max= 300, avg=261.40, stdev=32.54, samples=20 00:21:11.180 lat (msec) : 50=26.92%, 100=71.67%, 250=1.41% 00:21:11.180 cpu : usr=36.92%, sys=1.13%, ctx=1190, majf=0, minf=9 00:21:11.180 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.1%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:11.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 issued rwts: total=2630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.180 filename2: 
(groupid=0, jobs=1): err= 0: pid=87539: Thu Nov 28 07:30:31 2024 00:21:11.180 read: IOPS=269, BW=1077KiB/s (1103kB/s)(10.6MiB/10045msec) 00:21:11.180 slat (usec): min=3, max=8032, avg=28.73, stdev=306.92 00:21:11.180 clat (usec): min=1481, max=119953, avg=59213.26, stdev=19550.53 00:21:11.180 lat (usec): min=1487, max=119983, avg=59241.99, stdev=19552.33 00:21:11.180 clat percentiles (usec): 00:21:11.180 | 1.00th=[ 1827], 5.00th=[ 24773], 10.00th=[ 35914], 20.00th=[ 47973], 00:21:11.180 | 30.00th=[ 50070], 40.00th=[ 58459], 50.00th=[ 60031], 60.00th=[ 60556], 00:21:11.180 | 70.00th=[ 68682], 80.00th=[ 71828], 90.00th=[ 84411], 95.00th=[ 90702], 00:21:11.180 | 99.00th=[105382], 99.50th=[107480], 99.90th=[110625], 99.95th=[112722], 00:21:11.180 | 99.99th=[120062] 00:21:11.180 bw ( KiB/s): min= 736, max= 1920, per=4.17%, avg=1075.60, stdev=237.24, samples=20 00:21:11.180 iops : min= 184, max= 480, avg=268.90, stdev=59.31, samples=20 00:21:11.180 lat (msec) : 2=1.18%, 4=1.18%, 10=1.33%, 20=1.04%, 50=25.10% 00:21:11.180 lat (msec) : 100=68.91%, 250=1.26% 00:21:11.180 cpu : usr=32.96%, sys=1.13%, ctx=870, majf=0, minf=0 00:21:11.180 IO depths : 1=0.2%, 2=0.7%, 4=2.1%, 8=80.0%, 16=16.9%, 32=0.0%, >=64=0.0% 00:21:11.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 complete : 0=0.0%, 4=88.6%, 8=10.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 issued rwts: total=2705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.180 filename2: (groupid=0, jobs=1): err= 0: pid=87540: Thu Nov 28 07:30:31 2024 00:21:11.180 read: IOPS=267, BW=1070KiB/s (1095kB/s)(10.5MiB/10025msec) 00:21:11.180 slat (usec): min=5, max=8026, avg=36.86, stdev=329.94 00:21:11.180 clat (msec): min=24, max=113, avg=59.61, stdev=15.73 00:21:11.180 lat (msec): min=24, max=113, avg=59.64, stdev=15.74 00:21:11.180 clat percentiles (msec): 00:21:11.180 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 47], 00:21:11.180 | 30.00th=[ 52], 40.00th=[ 56], 50.00th=[ 59], 60.00th=[ 62], 00:21:11.180 | 70.00th=[ 65], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 91], 00:21:11.180 | 99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 107], 99.95th=[ 110], 00:21:11.180 | 99.99th=[ 113] 00:21:11.180 bw ( KiB/s): min= 792, max= 1168, per=4.14%, avg=1068.30, stdev=119.55, samples=20 00:21:11.180 iops : min= 198, max= 292, avg=267.05, stdev=29.87, samples=20 00:21:11.180 lat (msec) : 50=27.23%, 100=71.80%, 250=0.97% 00:21:11.180 cpu : usr=43.68%, sys=1.40%, ctx=1201, majf=0, minf=9 00:21:11.180 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=80.3%, 16=16.5%, 32=0.0%, >=64=0.0% 00:21:11.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 issued rwts: total=2681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.180 filename2: (groupid=0, jobs=1): err= 0: pid=87541: Thu Nov 28 07:30:31 2024 00:21:11.180 read: IOPS=300, BW=1203KiB/s (1232kB/s)(11.7MiB/10001msec) 00:21:11.180 slat (usec): min=6, max=8044, avg=26.40, stdev=231.30 00:21:11.180 clat (usec): min=852, max=125099, avg=53097.04, stdev=24109.30 00:21:11.180 lat (usec): min=859, max=125129, avg=53123.44, stdev=24109.41 00:21:11.180 clat percentiles (usec): 00:21:11.180 | 1.00th=[ 1029], 5.00th=[ 1221], 10.00th=[ 7832], 20.00th=[ 36439], 00:21:11.180 | 30.00th=[ 45876], 40.00th=[ 48497], 50.00th=[ 57410], 60.00th=[ 60031], 
00:21:11.180 | 70.00th=[ 61604], 80.00th=[ 70779], 90.00th=[ 83362], 95.00th=[ 94897], 00:21:11.180 | 99.00th=[107480], 99.50th=[108528], 99.90th=[120062], 99.95th=[125305], 00:21:11.180 | 99.99th=[125305] 00:21:11.180 bw ( KiB/s): min= 824, max= 1256, per=4.13%, avg=1065.68, stdev=152.01, samples=19 00:21:11.180 iops : min= 206, max= 314, avg=266.42, stdev=38.00, samples=19 00:21:11.180 lat (usec) : 1000=0.90% 00:21:11.180 lat (msec) : 2=7.68%, 4=0.73%, 10=1.30%, 50=30.70%, 100=56.67% 00:21:11.180 lat (msec) : 250=2.03% 00:21:11.180 cpu : usr=32.88%, sys=1.17%, ctx=903, majf=0, minf=9 00:21:11.180 IO depths : 1=0.1%, 2=0.8%, 4=4.2%, 8=79.5%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:11.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 complete : 0=0.0%, 4=88.3%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.180 issued rwts: total=3007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.180 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.180 filename2: (groupid=0, jobs=1): err= 0: pid=87542: Thu Nov 28 07:30:31 2024 00:21:11.180 read: IOPS=278, BW=1113KiB/s (1139kB/s)(10.9MiB/10004msec) 00:21:11.180 slat (usec): min=4, max=9021, avg=42.65, stdev=413.02 00:21:11.180 clat (msec): min=2, max=137, avg=57.30, stdev=17.56 00:21:11.180 lat (msec): min=2, max=137, avg=57.34, stdev=17.57 00:21:11.180 clat percentiles (msec): 00:21:11.180 | 1.00th=[ 10], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 45], 00:21:11.180 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 00:21:11.180 | 70.00th=[ 62], 80.00th=[ 71], 90.00th=[ 84], 95.00th=[ 91], 00:21:11.180 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 122], 99.95th=[ 129], 00:21:11.180 | 99.99th=[ 138] 00:21:11.180 bw ( KiB/s): min= 768, max= 1328, per=4.24%, avg=1093.47, stdev=128.56, samples=19 00:21:11.180 iops : min= 192, max= 332, avg=273.37, stdev=32.14, samples=19 00:21:11.181 lat (msec) : 4=0.11%, 10=0.93%, 20=0.11%, 50=37.37%, 100=59.94% 00:21:11.181 lat (msec) : 250=1.55% 00:21:11.181 cpu : usr=32.16%, sys=1.20%, ctx=842, majf=0, minf=9 00:21:11.181 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.0%, 16=16.4%, 32=0.0%, >=64=0.0% 00:21:11.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.181 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.181 issued rwts: total=2783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.181 filename2: (groupid=0, jobs=1): err= 0: pid=87543: Thu Nov 28 07:30:31 2024 00:21:11.181 read: IOPS=263, BW=1052KiB/s (1077kB/s)(10.3MiB/10029msec) 00:21:11.181 slat (usec): min=3, max=8032, avg=34.44, stdev=341.30 00:21:11.181 clat (msec): min=13, max=129, avg=60.64, stdev=16.44 00:21:11.181 lat (msec): min=13, max=129, avg=60.68, stdev=16.44 00:21:11.181 clat percentiles (msec): 00:21:11.181 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:21:11.181 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 61], 00:21:11.181 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 94], 00:21:11.181 | 99.00th=[ 104], 99.50th=[ 107], 99.90th=[ 121], 99.95th=[ 121], 00:21:11.181 | 99.99th=[ 130] 00:21:11.181 bw ( KiB/s): min= 756, max= 1351, per=4.07%, avg=1050.55, stdev=137.76, samples=20 00:21:11.181 iops : min= 189, max= 337, avg=262.60, stdev=34.35, samples=20 00:21:11.181 lat (msec) : 20=0.76%, 50=26.23%, 100=71.68%, 250=1.33% 00:21:11.181 cpu : usr=32.11%, sys=1.10%, ctx=851, majf=0, minf=9 00:21:11.181 IO depths : 1=0.1%, 
2=0.2%, 4=0.7%, 8=81.7%, 16=17.4%, 32=0.0%, >=64=0.0% 00:21:11.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.181 complete : 0=0.0%, 4=88.3%, 8=11.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.181 issued rwts: total=2638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.181 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:11.181 00:21:11.181 Run status group 0 (all jobs): 00:21:11.181 READ: bw=25.2MiB/s (26.4MB/s), 1032KiB/s-1203KiB/s (1057kB/s-1232kB/s), io=253MiB (265MB), run=10001-10053msec 00:21:11.181 07:30:31 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:11.181 07:30:31 -- target/dif.sh@43 -- # local sub 00:21:11.181 07:30:31 -- target/dif.sh@45 -- # for sub in "$@" 00:21:11.181 07:30:31 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:11.181 07:30:31 -- target/dif.sh@36 -- # local sub_id=0 00:21:11.181 07:30:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@45 -- # for sub in "$@" 00:21:11.181 07:30:31 -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:11.181 07:30:31 -- target/dif.sh@36 -- # local sub_id=1 00:21:11.181 07:30:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@45 -- # for sub in "$@" 00:21:11.181 07:30:31 -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:11.181 07:30:31 -- target/dif.sh@36 -- # local sub_id=2 00:21:11.181 07:30:31 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@115 -- # NULL_DIF=1 00:21:11.181 07:30:31 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:11.181 07:30:31 -- target/dif.sh@115 -- # numjobs=2 00:21:11.181 07:30:31 -- target/dif.sh@115 -- # iodepth=8 00:21:11.181 07:30:31 -- target/dif.sh@115 -- # runtime=5 00:21:11.181 07:30:31 -- target/dif.sh@115 -- # files=1 00:21:11.181 07:30:31 -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:11.181 07:30:31 -- 
target/dif.sh@28 -- # local sub 00:21:11.181 07:30:31 -- target/dif.sh@30 -- # for sub in "$@" 00:21:11.181 07:30:31 -- target/dif.sh@31 -- # create_subsystem 0 00:21:11.181 07:30:31 -- target/dif.sh@18 -- # local sub_id=0 00:21:11.181 07:30:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 bdev_null0 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 [2024-11-28 07:30:31.599270] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@30 -- # for sub in "$@" 00:21:11.181 07:30:31 -- target/dif.sh@31 -- # create_subsystem 1 00:21:11.181 07:30:31 -- target/dif.sh@18 -- # local sub_id=1 00:21:11.181 07:30:31 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 bdev_null1 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.181 07:30:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.181 07:30:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.181 07:30:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.181 07:30:31 -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:11.181 07:30:31 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:11.181 07:30:31 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 
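The create_subsystem calls traced above reduce to four RPCs per subsystem: create a null bdev with a 16-byte metadata area and DIF type 1, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A stand-alone sketch of the same sequence for subsystem 0, assuming rpc_cmd is the usual thin wrapper around SPDK's scripts/rpc.py (the rpc.py path below is inferred from the repo layout seen in this log, not taken from the trace):

  #!/usr/bin/env bash
  # Illustration only: replay the subsystem-0 setup seen in the trace by hand.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Subsystem 1 is created the same way with bdev_null1 and cnode1, as the trace shows.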
00:21:11.181 07:30:31 -- nvmf/common.sh@520 -- # config=() 00:21:11.181 07:30:31 -- nvmf/common.sh@520 -- # local subsystem config 00:21:11.181 07:30:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:11.181 07:30:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:11.181 { 00:21:11.181 "params": { 00:21:11.181 "name": "Nvme$subsystem", 00:21:11.181 "trtype": "$TEST_TRANSPORT", 00:21:11.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.181 "adrfam": "ipv4", 00:21:11.181 "trsvcid": "$NVMF_PORT", 00:21:11.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.181 "hdgst": ${hdgst:-false}, 00:21:11.181 "ddgst": ${ddgst:-false} 00:21:11.181 }, 00:21:11.181 "method": "bdev_nvme_attach_controller" 00:21:11.181 } 00:21:11.181 EOF 00:21:11.181 )") 00:21:11.181 07:30:31 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:11.181 07:30:31 -- target/dif.sh@82 -- # gen_fio_conf 00:21:11.181 07:30:31 -- target/dif.sh@54 -- # local file 00:21:11.181 07:30:31 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:11.181 07:30:31 -- target/dif.sh@56 -- # cat 00:21:11.181 07:30:31 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:11.181 07:30:31 -- nvmf/common.sh@542 -- # cat 00:21:11.181 07:30:31 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:11.181 07:30:31 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:11.181 07:30:31 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.181 07:30:31 -- common/autotest_common.sh@1330 -- # shift 00:21:11.181 07:30:31 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:11.181 07:30:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.181 07:30:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:11.181 07:30:31 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:11.181 07:30:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:11.181 { 00:21:11.181 "params": { 00:21:11.181 "name": "Nvme$subsystem", 00:21:11.181 "trtype": "$TEST_TRANSPORT", 00:21:11.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:11.181 "adrfam": "ipv4", 00:21:11.181 "trsvcid": "$NVMF_PORT", 00:21:11.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:11.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:11.181 "hdgst": ${hdgst:-false}, 00:21:11.182 "ddgst": ${ddgst:-false} 00:21:11.182 }, 00:21:11.182 "method": "bdev_nvme_attach_controller" 00:21:11.182 } 00:21:11.182 EOF 00:21:11.182 )") 00:21:11.182 07:30:31 -- target/dif.sh@72 -- # (( file <= files )) 00:21:11.182 07:30:31 -- target/dif.sh@73 -- # cat 00:21:11.182 07:30:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.182 07:30:31 -- nvmf/common.sh@542 -- # cat 00:21:11.182 07:30:31 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:11.182 07:30:31 -- target/dif.sh@72 -- # (( file++ )) 00:21:11.182 07:30:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:11.182 07:30:31 -- target/dif.sh@72 -- # (( file <= files )) 00:21:11.182 07:30:31 -- nvmf/common.sh@544 -- # jq . 
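The per-subsystem JSON fragments assembled above are combined and handed to fio on anonymous file descriptors: in the invocation that follows, /dev/fd/62 carries the generated bdev_nvme_attach_controller configuration and /dev/fd/61 carries the generated fio job file, while LD_PRELOAD injects the SPDK bdev ioengine. A sketch of the same pattern using plain bash process substitution (nvme_targets.json and dif_job.fio are placeholder files standing in for the gen_nvmf_target_json and gen_fio_conf output, not the test's actual helpers):

  # Illustration only: run fio with the SPDK bdev plugin, feeding both configs over /dev/fd,
  # mirroring the LD_PRELOAD + --spdk_json_conf invocation in the trace below.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(cat nvme_targets.json) \
    <(cat dif_job.fio)

The rpc.c spdk_rpc_listen *ERROR* lines printed as the threads start appear to be benign here: the fio plugin's app framework tries to bind the default /var/tmp/spdk.sock, which the already-running nvmf target holds, and the jobs proceed regardless.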
00:21:11.182 07:30:31 -- nvmf/common.sh@545 -- # IFS=, 00:21:11.182 07:30:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:11.182 "params": { 00:21:11.182 "name": "Nvme0", 00:21:11.182 "trtype": "tcp", 00:21:11.182 "traddr": "10.0.0.2", 00:21:11.182 "adrfam": "ipv4", 00:21:11.182 "trsvcid": "4420", 00:21:11.182 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:11.182 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:11.182 "hdgst": false, 00:21:11.182 "ddgst": false 00:21:11.182 }, 00:21:11.182 "method": "bdev_nvme_attach_controller" 00:21:11.182 },{ 00:21:11.182 "params": { 00:21:11.182 "name": "Nvme1", 00:21:11.182 "trtype": "tcp", 00:21:11.182 "traddr": "10.0.0.2", 00:21:11.182 "adrfam": "ipv4", 00:21:11.182 "trsvcid": "4420", 00:21:11.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:11.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:11.182 "hdgst": false, 00:21:11.182 "ddgst": false 00:21:11.182 }, 00:21:11.182 "method": "bdev_nvme_attach_controller" 00:21:11.182 }' 00:21:11.182 07:30:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:11.182 07:30:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:11.182 07:30:31 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:11.182 07:30:31 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:11.182 07:30:31 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:11.182 07:30:31 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:11.182 07:30:31 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:11.182 07:30:31 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:11.182 07:30:31 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:11.182 07:30:31 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:11.182 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:11.182 ... 00:21:11.182 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:11.182 ... 00:21:11.182 fio-3.35 00:21:11.182 Starting 4 threads 00:21:11.182 [2024-11-28 07:30:32.241938] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:21:11.182 [2024-11-28 07:30:32.241991] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:15.374 00:21:15.374 filename0: (groupid=0, jobs=1): err= 0: pid=87684: Thu Nov 28 07:30:37 2024 00:21:15.374 read: IOPS=2381, BW=18.6MiB/s (19.5MB/s)(93.1MiB/5001msec) 00:21:15.374 slat (usec): min=3, max=3093, avg=21.61, stdev=30.29 00:21:15.374 clat (usec): min=553, max=10451, avg=3277.38, stdev=770.02 00:21:15.374 lat (usec): min=564, max=10466, avg=3298.99, stdev=772.11 00:21:15.374 clat percentiles (usec): 00:21:15.374 | 1.00th=[ 1467], 5.00th=[ 1778], 10.00th=[ 2057], 20.00th=[ 2802], 00:21:15.374 | 30.00th=[ 3097], 40.00th=[ 3261], 50.00th=[ 3392], 60.00th=[ 3589], 00:21:15.374 | 70.00th=[ 3720], 80.00th=[ 3851], 90.00th=[ 4047], 95.00th=[ 4228], 00:21:15.374 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 6128], 99.95th=[10421], 00:21:15.374 | 99.99th=[10421] 00:21:15.374 bw ( KiB/s): min=17872, max=20288, per=24.76%, avg=19274.67, stdev=987.96, samples=9 00:21:15.374 iops : min= 2234, max= 2536, avg=2409.33, stdev=123.49, samples=9 00:21:15.374 lat (usec) : 750=0.01%, 1000=0.17% 00:21:15.374 lat (msec) : 2=8.94%, 4=78.22%, 10=12.60%, 20=0.07% 00:21:15.374 cpu : usr=93.96%, sys=4.78%, ctx=560, majf=0, minf=9 00:21:15.374 IO depths : 1=3.5%, 2=14.3%, 4=56.2%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.374 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.374 issued rwts: total=11912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:15.374 filename0: (groupid=0, jobs=1): err= 0: pid=87685: Thu Nov 28 07:30:37 2024 00:21:15.374 read: IOPS=2325, BW=18.2MiB/s (19.1MB/s)(90.9MiB/5001msec) 00:21:15.374 slat (usec): min=6, max=511, avg=20.66, stdev=12.42 00:21:15.374 clat (usec): min=744, max=6670, avg=3363.71, stdev=752.28 00:21:15.374 lat (usec): min=754, max=6694, avg=3384.37, stdev=753.76 00:21:15.374 clat percentiles (usec): 00:21:15.374 | 1.00th=[ 1254], 5.00th=[ 1795], 10.00th=[ 2114], 20.00th=[ 2999], 00:21:15.374 | 30.00th=[ 3195], 40.00th=[ 3359], 50.00th=[ 3490], 60.00th=[ 3654], 00:21:15.374 | 70.00th=[ 3785], 80.00th=[ 3916], 90.00th=[ 4080], 95.00th=[ 4293], 00:21:15.374 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[ 5800], 99.95th=[ 5932], 00:21:15.374 | 99.99th=[ 6456] 00:21:15.374 bw ( KiB/s): min=16992, max=19760, per=23.54%, avg=18330.67, stdev=883.30, samples=9 00:21:15.374 iops : min= 2124, max= 2470, avg=2291.33, stdev=110.41, samples=9 00:21:15.374 lat (usec) : 750=0.01%, 1000=0.38% 00:21:15.374 lat (msec) : 2=7.66%, 4=76.68%, 10=15.28% 00:21:15.374 cpu : usr=94.18%, sys=4.68%, ctx=71, majf=0, minf=9 00:21:15.374 IO depths : 1=3.8%, 2=15.3%, 4=55.6%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.374 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.374 issued rwts: total=11632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:15.374 filename1: (groupid=0, jobs=1): err= 0: pid=87686: Thu Nov 28 07:30:37 2024 00:21:15.374 read: IOPS=2420, BW=18.9MiB/s (19.8MB/s)(94.6MiB/5001msec) 00:21:15.374 slat (nsec): min=5782, max=93920, avg=21232.68, stdev=11088.82 00:21:15.374 clat (usec): min=346, max=8978, avg=3227.69, stdev=792.37 00:21:15.374 lat (usec): min=359, max=9001, avg=3248.92, 
stdev=794.20 00:21:15.374 clat percentiles (usec): 00:21:15.374 | 1.00th=[ 1319], 5.00th=[ 1713], 10.00th=[ 1909], 20.00th=[ 2606], 00:21:15.374 | 30.00th=[ 3064], 40.00th=[ 3195], 50.00th=[ 3359], 60.00th=[ 3523], 00:21:15.374 | 70.00th=[ 3687], 80.00th=[ 3818], 90.00th=[ 4047], 95.00th=[ 4228], 00:21:15.374 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5866], 99.95th=[ 8979], 00:21:15.374 | 99.99th=[ 8979] 00:21:15.374 bw ( KiB/s): min=18752, max=20976, per=25.35%, avg=19735.78, stdev=698.58, samples=9 00:21:15.374 iops : min= 2344, max= 2622, avg=2466.89, stdev=87.42, samples=9 00:21:15.374 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.40% 00:21:15.374 lat (msec) : 2=10.50%, 4=77.59%, 10=11.49% 00:21:15.374 cpu : usr=95.44%, sys=3.74%, ctx=4, majf=0, minf=9 00:21:15.374 IO depths : 1=3.3%, 2=13.1%, 4=56.7%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.374 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.374 issued rwts: total=12107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:15.374 filename1: (groupid=0, jobs=1): err= 0: pid=87687: Thu Nov 28 07:30:37 2024 00:21:15.374 read: IOPS=2604, BW=20.4MiB/s (21.3MB/s)(102MiB/5002msec) 00:21:15.374 slat (nsec): min=4669, max=96340, avg=17802.22, stdev=10791.37 00:21:15.374 clat (usec): min=282, max=6376, avg=3012.75, stdev=873.61 00:21:15.374 lat (usec): min=295, max=6403, avg=3030.56, stdev=876.16 00:21:15.374 clat percentiles (usec): 00:21:15.374 | 1.00th=[ 930], 5.00th=[ 1139], 10.00th=[ 1729], 20.00th=[ 2147], 00:21:15.374 | 30.00th=[ 2900], 40.00th=[ 3064], 50.00th=[ 3228], 60.00th=[ 3359], 00:21:15.374 | 70.00th=[ 3556], 80.00th=[ 3720], 90.00th=[ 3949], 95.00th=[ 4080], 00:21:15.374 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5735], 00:21:15.374 | 99.99th=[ 5932] 00:21:15.374 bw ( KiB/s): min=19504, max=22704, per=26.40%, avg=20551.11, stdev=1041.54, samples=9 00:21:15.374 iops : min= 2438, max= 2838, avg=2568.89, stdev=130.19, samples=9 00:21:15.374 lat (usec) : 500=0.20%, 750=0.27%, 1000=1.29% 00:21:15.374 lat (msec) : 2=15.52%, 4=75.23%, 10=7.50% 00:21:15.374 cpu : usr=95.20%, sys=3.94%, ctx=9, majf=0, minf=0 00:21:15.374 IO depths : 1=2.2%, 2=9.5%, 4=58.8%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.374 complete : 0=0.0%, 4=96.0%, 8=4.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.374 issued rwts: total=13030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.374 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:15.374 00:21:15.374 Run status group 0 (all jobs): 00:21:15.374 READ: bw=76.0MiB/s (79.7MB/s), 18.2MiB/s-20.4MiB/s (19.1MB/s-21.3MB/s), io=380MiB (399MB), run=5001-5002msec 00:21:15.374 07:30:37 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:15.374 07:30:37 -- target/dif.sh@43 -- # local sub 00:21:15.374 07:30:37 -- target/dif.sh@45 -- # for sub in "$@" 00:21:15.374 07:30:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:15.374 07:30:37 -- target/dif.sh@36 -- # local sub_id=0 00:21:15.374 07:30:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:15.374 07:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.374 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.374 07:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.374 07:30:37 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:15.374 07:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.374 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.374 07:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.374 07:30:37 -- target/dif.sh@45 -- # for sub in "$@" 00:21:15.374 07:30:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:15.374 07:30:37 -- target/dif.sh@36 -- # local sub_id=1 00:21:15.374 07:30:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:15.374 07:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.374 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.374 07:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.374 07:30:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:15.374 07:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.374 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.374 07:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.374 ************************************ 00:21:15.374 END TEST fio_dif_rand_params 00:21:15.374 ************************************ 00:21:15.374 00:21:15.374 real 0m23.386s 00:21:15.374 user 2m6.640s 00:21:15.374 sys 0m5.382s 00:21:15.374 07:30:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:15.374 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.634 07:30:37 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:15.634 07:30:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:15.634 07:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:15.634 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.634 ************************************ 00:21:15.634 START TEST fio_dif_digest 00:21:15.634 ************************************ 00:21:15.634 07:30:37 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:21:15.634 07:30:37 -- target/dif.sh@123 -- # local NULL_DIF 00:21:15.634 07:30:37 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:15.634 07:30:37 -- target/dif.sh@125 -- # local hdgst ddgst 00:21:15.634 07:30:37 -- target/dif.sh@127 -- # NULL_DIF=3 00:21:15.634 07:30:37 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:15.634 07:30:37 -- target/dif.sh@127 -- # numjobs=3 00:21:15.634 07:30:37 -- target/dif.sh@127 -- # iodepth=3 00:21:15.634 07:30:37 -- target/dif.sh@127 -- # runtime=10 00:21:15.634 07:30:37 -- target/dif.sh@128 -- # hdgst=true 00:21:15.634 07:30:37 -- target/dif.sh@128 -- # ddgst=true 00:21:15.634 07:30:37 -- target/dif.sh@130 -- # create_subsystems 0 00:21:15.634 07:30:37 -- target/dif.sh@28 -- # local sub 00:21:15.634 07:30:37 -- target/dif.sh@30 -- # for sub in "$@" 00:21:15.634 07:30:37 -- target/dif.sh@31 -- # create_subsystem 0 00:21:15.634 07:30:37 -- target/dif.sh@18 -- # local sub_id=0 00:21:15.634 07:30:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:15.634 07:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.634 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.634 bdev_null0 00:21:15.634 07:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.634 07:30:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:15.634 07:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.634 07:30:37 -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.634 07:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.634 07:30:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:15.634 07:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.634 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.634 07:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.634 07:30:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:15.634 07:30:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.634 07:30:37 -- common/autotest_common.sh@10 -- # set +x 00:21:15.634 [2024-11-28 07:30:37.720597] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.634 07:30:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.634 07:30:37 -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:15.634 07:30:37 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:15.634 07:30:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:15.634 07:30:37 -- nvmf/common.sh@520 -- # config=() 00:21:15.634 07:30:37 -- nvmf/common.sh@520 -- # local subsystem config 00:21:15.634 07:30:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:15.634 07:30:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:15.634 { 00:21:15.634 "params": { 00:21:15.634 "name": "Nvme$subsystem", 00:21:15.634 "trtype": "$TEST_TRANSPORT", 00:21:15.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.634 "adrfam": "ipv4", 00:21:15.634 "trsvcid": "$NVMF_PORT", 00:21:15.634 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.634 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.634 "hdgst": ${hdgst:-false}, 00:21:15.634 "ddgst": ${ddgst:-false} 00:21:15.634 }, 00:21:15.634 "method": "bdev_nvme_attach_controller" 00:21:15.634 } 00:21:15.634 EOF 00:21:15.634 )") 00:21:15.634 07:30:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.634 07:30:37 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.634 07:30:37 -- target/dif.sh@82 -- # gen_fio_conf 00:21:15.634 07:30:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:21:15.634 07:30:37 -- target/dif.sh@54 -- # local file 00:21:15.634 07:30:37 -- target/dif.sh@56 -- # cat 00:21:15.634 07:30:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:15.634 07:30:37 -- nvmf/common.sh@542 -- # cat 00:21:15.634 07:30:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:21:15.634 07:30:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.634 07:30:37 -- common/autotest_common.sh@1330 -- # shift 00:21:15.634 07:30:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:21:15.634 07:30:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.634 07:30:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:21:15.634 07:30:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.634 07:30:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:15.634 07:30:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:21:15.634 07:30:37 -- target/dif.sh@72 -- # (( file <= files )) 00:21:15.634 07:30:37 -- 
nvmf/common.sh@544 -- # jq . 00:21:15.634 07:30:37 -- nvmf/common.sh@545 -- # IFS=, 00:21:15.634 07:30:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:15.634 "params": { 00:21:15.634 "name": "Nvme0", 00:21:15.634 "trtype": "tcp", 00:21:15.634 "traddr": "10.0.0.2", 00:21:15.634 "adrfam": "ipv4", 00:21:15.634 "trsvcid": "4420", 00:21:15.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:15.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:15.634 "hdgst": true, 00:21:15.634 "ddgst": true 00:21:15.634 }, 00:21:15.634 "method": "bdev_nvme_attach_controller" 00:21:15.634 }' 00:21:15.634 07:30:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:15.634 07:30:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:15.634 07:30:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.634 07:30:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:21:15.634 07:30:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:15.634 07:30:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:21:15.634 07:30:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:21:15.635 07:30:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:21:15.635 07:30:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:15.635 07:30:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:15.894 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:15.894 ... 00:21:15.894 fio-3.35 00:21:15.894 Starting 3 threads 00:21:16.154 [2024-11-28 07:30:38.287924] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
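The job banner just above comes from fio running under SPDK's bdev fio plugin: the harness hands fio the generated job file on /dev/fd/61 and the JSON shown earlier on /dev/fd/62, so the NVMe-oF controller is attached with header and data digests enabled before any I/O starts. The spdk_rpc_listen errors around this point appear harmless in this run; the plugin simply cannot claim /var/tmp/spdk.sock because the running nvmf target already holds it, and the workload continues to completion below. A rough stand-alone equivalent is sketched here; the file names, and the Nvme0n1 bdev name derived from the Nvme0 controller, are illustrative assumptions rather than anything the harness writes to disk.

# Hypothetical on-disk version of the JSON the script pipes in via /dev/fd/62.
cat > bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
JSON

# Hypothetical job file standing in for the one generated on /dev/fd/61.
cat > digest.fio <<'FIO'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1
[filename0]
# Controller "Nvme0" exposes its first namespace as bdev "Nvme0n1" (assumed name).
filename=Nvme0n1
FIO

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev fio digest.fio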
00:21:16.154 [2024-11-28 07:30:38.287983] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:28.376 00:21:28.376 filename0: (groupid=0, jobs=1): err= 0: pid=87798: Thu Nov 28 07:30:48 2024 00:21:28.376 read: IOPS=274, BW=34.4MiB/s (36.0MB/s)(344MiB/10005msec) 00:21:28.376 slat (nsec): min=6292, max=76261, avg=21951.40, stdev=11762.45 00:21:28.376 clat (usec): min=9839, max=41335, avg=10860.21, stdev=1882.36 00:21:28.376 lat (usec): min=9846, max=41364, avg=10882.16, stdev=1881.93 00:21:28.376 clat percentiles (usec): 00:21:28.376 | 1.00th=[10290], 5.00th=[10290], 10.00th=[10290], 20.00th=[10421], 00:21:28.376 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:21:28.376 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11076], 95.00th=[11338], 00:21:28.376 | 99.00th=[16712], 99.50th=[19268], 99.90th=[41157], 99.95th=[41157], 00:21:28.376 | 99.99th=[41157] 00:21:28.376 bw ( KiB/s): min=25344, max=36864, per=33.31%, avg=35174.40, stdev=2548.39, samples=20 00:21:28.376 iops : min= 198, max= 288, avg=274.80, stdev=19.91, samples=20 00:21:28.376 lat (msec) : 10=0.11%, 20=99.56%, 50=0.33% 00:21:28.376 cpu : usr=94.93%, sys=4.58%, ctx=113, majf=0, minf=9 00:21:28.376 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:28.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.376 issued rwts: total=2751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.376 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:28.376 filename0: (groupid=0, jobs=1): err= 0: pid=87799: Thu Nov 28 07:30:48 2024 00:21:28.376 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(344MiB/10009msec) 00:21:28.376 slat (nsec): min=6090, max=79151, avg=16439.28, stdev=9608.75 00:21:28.376 clat (usec): min=8279, max=40972, avg=10864.27, stdev=1883.62 00:21:28.376 lat (usec): min=8286, max=41002, avg=10880.71, stdev=1883.71 00:21:28.376 clat percentiles (usec): 00:21:28.376 | 1.00th=[10290], 5.00th=[10290], 10.00th=[10290], 20.00th=[10421], 00:21:28.376 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:21:28.376 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11076], 95.00th=[11338], 00:21:28.376 | 99.00th=[16712], 99.50th=[17433], 99.90th=[41157], 99.95th=[41157], 00:21:28.376 | 99.99th=[41157] 00:21:28.376 bw ( KiB/s): min=25344, max=36864, per=33.35%, avg=35212.80, stdev=2519.90, samples=20 00:21:28.376 iops : min= 198, max= 288, avg=275.10, stdev=19.69, samples=20 00:21:28.376 lat (msec) : 10=0.33%, 20=99.24%, 50=0.44% 00:21:28.376 cpu : usr=95.24%, sys=4.24%, ctx=17, majf=0, minf=0 00:21:28.376 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:28.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.376 issued rwts: total=2754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.376 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:28.376 filename0: (groupid=0, jobs=1): err= 0: pid=87800: Thu Nov 28 07:30:48 2024 00:21:28.376 read: IOPS=274, BW=34.4MiB/s (36.0MB/s)(344MiB/10004msec) 00:21:28.376 slat (usec): min=6, max=225, avg=21.93, stdev=13.43 00:21:28.376 clat (usec): min=10197, max=41335, avg=10857.93, stdev=1879.17 00:21:28.376 lat (usec): min=10211, max=41361, avg=10879.86, stdev=1878.71 00:21:28.376 clat percentiles (usec): 00:21:28.376 | 
1.00th=[10290], 5.00th=[10290], 10.00th=[10290], 20.00th=[10421], 00:21:28.376 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:21:28.376 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11076], 95.00th=[11338], 00:21:28.376 | 99.00th=[16712], 99.50th=[19268], 99.90th=[41157], 99.95th=[41157], 00:21:28.376 | 99.99th=[41157] 00:21:28.376 bw ( KiB/s): min=25344, max=36864, per=33.31%, avg=35171.10, stdev=2539.16, samples=20 00:21:28.376 iops : min= 198, max= 288, avg=274.75, stdev=19.86, samples=20 00:21:28.376 lat (msec) : 20=99.67%, 50=0.33% 00:21:28.376 cpu : usr=94.79%, sys=4.43%, ctx=109, majf=0, minf=9 00:21:28.376 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:28.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.376 issued rwts: total=2751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.376 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:28.376 00:21:28.376 Run status group 0 (all jobs): 00:21:28.376 READ: bw=103MiB/s (108MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.1MB/s), io=1032MiB (1082MB), run=10004-10009msec 00:21:28.376 07:30:48 -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:28.376 07:30:48 -- target/dif.sh@43 -- # local sub 00:21:28.376 07:30:48 -- target/dif.sh@45 -- # for sub in "$@" 00:21:28.376 07:30:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:28.376 07:30:48 -- target/dif.sh@36 -- # local sub_id=0 00:21:28.376 07:30:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:28.376 07:30:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.376 07:30:48 -- common/autotest_common.sh@10 -- # set +x 00:21:28.376 07:30:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.376 07:30:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:28.376 07:30:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.376 07:30:48 -- common/autotest_common.sh@10 -- # set +x 00:21:28.376 ************************************ 00:21:28.376 END TEST fio_dif_digest 00:21:28.376 ************************************ 00:21:28.376 07:30:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.376 00:21:28.376 real 0m10.944s 00:21:28.376 user 0m29.071s 00:21:28.376 sys 0m1.604s 00:21:28.376 07:30:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:28.376 07:30:48 -- common/autotest_common.sh@10 -- # set +x 00:21:28.376 07:30:48 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:28.376 07:30:48 -- target/dif.sh@147 -- # nvmftestfini 00:21:28.376 07:30:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:28.376 07:30:48 -- nvmf/common.sh@116 -- # sync 00:21:28.376 07:30:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:28.376 07:30:48 -- nvmf/common.sh@119 -- # set +e 00:21:28.376 07:30:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:28.376 07:30:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:28.376 rmmod nvme_tcp 00:21:28.376 rmmod nvme_fabrics 00:21:28.376 rmmod nvme_keyring 00:21:28.376 07:30:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:28.376 07:30:48 -- nvmf/common.sh@123 -- # set -e 00:21:28.376 07:30:48 -- nvmf/common.sh@124 -- # return 0 00:21:28.376 07:30:48 -- nvmf/common.sh@477 -- # '[' -n 87036 ']' 00:21:28.376 07:30:48 -- nvmf/common.sh@478 -- # killprocess 87036 00:21:28.376 07:30:48 -- common/autotest_common.sh@936 -- # '[' -z 87036 ']' 00:21:28.376 07:30:48 
-- common/autotest_common.sh@940 -- # kill -0 87036 00:21:28.376 07:30:48 -- common/autotest_common.sh@941 -- # uname 00:21:28.376 07:30:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:28.376 07:30:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87036 00:21:28.376 killing process with pid 87036 00:21:28.376 07:30:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:28.376 07:30:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:28.376 07:30:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87036' 00:21:28.376 07:30:48 -- common/autotest_common.sh@955 -- # kill 87036 00:21:28.376 07:30:48 -- common/autotest_common.sh@960 -- # wait 87036 00:21:28.376 07:30:48 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:21:28.376 07:30:48 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:28.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:28.376 Waiting for block devices as requested 00:21:28.376 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:28.376 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:28.376 07:30:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:28.376 07:30:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:28.376 07:30:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.376 07:30:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:28.376 07:30:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.376 07:30:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:28.376 07:30:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.377 07:30:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:28.377 00:21:28.377 real 0m59.536s 00:21:28.377 user 3m50.955s 00:21:28.377 sys 0m16.174s 00:21:28.377 07:30:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:28.377 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:21:28.377 ************************************ 00:21:28.377 END TEST nvmf_dif 00:21:28.377 ************************************ 00:21:28.377 07:30:49 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:28.377 07:30:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:28.377 07:30:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:28.377 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:21:28.377 ************************************ 00:21:28.377 START TEST nvmf_abort_qd_sizes 00:21:28.377 ************************************ 00:21:28.377 07:30:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:28.377 * Looking for test storage... 
00:21:28.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:28.377 07:30:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:28.377 07:30:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:28.377 07:30:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:28.377 07:30:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:28.377 07:30:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:28.377 07:30:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:28.377 07:30:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:28.377 07:30:49 -- scripts/common.sh@335 -- # IFS=.-: 00:21:28.377 07:30:49 -- scripts/common.sh@335 -- # read -ra ver1 00:21:28.377 07:30:49 -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.377 07:30:49 -- scripts/common.sh@336 -- # read -ra ver2 00:21:28.377 07:30:49 -- scripts/common.sh@337 -- # local 'op=<' 00:21:28.377 07:30:49 -- scripts/common.sh@339 -- # ver1_l=2 00:21:28.377 07:30:49 -- scripts/common.sh@340 -- # ver2_l=1 00:21:28.377 07:30:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:28.377 07:30:49 -- scripts/common.sh@343 -- # case "$op" in 00:21:28.377 07:30:49 -- scripts/common.sh@344 -- # : 1 00:21:28.377 07:30:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:28.377 07:30:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:28.377 07:30:49 -- scripts/common.sh@364 -- # decimal 1 00:21:28.377 07:30:49 -- scripts/common.sh@352 -- # local d=1 00:21:28.377 07:30:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.377 07:30:49 -- scripts/common.sh@354 -- # echo 1 00:21:28.377 07:30:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:28.377 07:30:49 -- scripts/common.sh@365 -- # decimal 2 00:21:28.377 07:30:49 -- scripts/common.sh@352 -- # local d=2 00:21:28.377 07:30:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.377 07:30:49 -- scripts/common.sh@354 -- # echo 2 00:21:28.377 07:30:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:28.377 07:30:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:28.377 07:30:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:28.377 07:30:49 -- scripts/common.sh@367 -- # return 0 00:21:28.377 07:30:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.377 07:30:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:28.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.377 --rc genhtml_branch_coverage=1 00:21:28.377 --rc genhtml_function_coverage=1 00:21:28.377 --rc genhtml_legend=1 00:21:28.377 --rc geninfo_all_blocks=1 00:21:28.377 --rc geninfo_unexecuted_blocks=1 00:21:28.377 00:21:28.377 ' 00:21:28.377 07:30:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:28.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.377 --rc genhtml_branch_coverage=1 00:21:28.377 --rc genhtml_function_coverage=1 00:21:28.377 --rc genhtml_legend=1 00:21:28.377 --rc geninfo_all_blocks=1 00:21:28.377 --rc geninfo_unexecuted_blocks=1 00:21:28.377 00:21:28.377 ' 00:21:28.377 07:30:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:28.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.377 --rc genhtml_branch_coverage=1 00:21:28.377 --rc genhtml_function_coverage=1 00:21:28.377 --rc genhtml_legend=1 00:21:28.377 --rc geninfo_all_blocks=1 00:21:28.377 --rc geninfo_unexecuted_blocks=1 00:21:28.377 00:21:28.377 ' 00:21:28.377 
07:30:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:28.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.377 --rc genhtml_branch_coverage=1 00:21:28.377 --rc genhtml_function_coverage=1 00:21:28.377 --rc genhtml_legend=1 00:21:28.377 --rc geninfo_all_blocks=1 00:21:28.377 --rc geninfo_unexecuted_blocks=1 00:21:28.377 00:21:28.377 ' 00:21:28.377 07:30:49 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:28.377 07:30:49 -- nvmf/common.sh@7 -- # uname -s 00:21:28.377 07:30:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.377 07:30:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.377 07:30:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.377 07:30:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.377 07:30:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.377 07:30:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.377 07:30:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.377 07:30:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.377 07:30:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.377 07:30:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.377 07:30:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f 00:21:28.377 07:30:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=42ff1b5c-407a-478a-8c45-326c3d19865f 00:21:28.377 07:30:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.377 07:30:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.377 07:30:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:28.377 07:30:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.377 07:30:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.377 07:30:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.377 07:30:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.377 07:30:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.377 07:30:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.377 07:30:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.377 07:30:49 -- paths/export.sh@5 -- # export PATH 00:21:28.377 07:30:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.377 07:30:49 -- nvmf/common.sh@46 -- # : 0 00:21:28.377 07:30:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:28.377 07:30:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:28.377 07:30:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:28.377 07:30:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.377 07:30:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.377 07:30:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:28.377 07:30:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:28.377 07:30:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:28.377 07:30:49 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:21:28.377 07:30:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:28.377 07:30:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.377 07:30:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:28.377 07:30:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:28.377 07:30:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:28.377 07:30:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.377 07:30:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:28.377 07:30:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.377 07:30:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:28.377 07:30:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:28.377 07:30:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:28.377 07:30:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:28.377 07:30:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:28.377 07:30:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:28.377 07:30:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.377 07:30:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.377 07:30:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:28.377 07:30:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:28.377 07:30:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:28.377 07:30:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:28.377 07:30:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:28.377 07:30:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.377 07:30:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:28.377 07:30:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:28.377 07:30:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:28.377 07:30:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:28.377 07:30:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:28.377 07:30:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:28.377 Cannot find device "nvmf_tgt_br" 00:21:28.377 07:30:49 -- nvmf/common.sh@154 -- # true 00:21:28.377 07:30:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:28.378 Cannot find device "nvmf_tgt_br2" 00:21:28.378 07:30:49 -- nvmf/common.sh@155 -- # true 
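The "Cannot find device" messages above are the teardown half of nvmf_veth_init failing harmlessly on a fresh runner; the commands that follow rebuild the fixture: a network namespace nvmf_tgt_ns_spdk holding the target ends of the veth pairs (nvmf_tgt_if at 10.0.0.2/24, nvmf_tgt_if2 at 10.0.0.3/24), the initiator interface nvmf_init_if at 10.0.0.1/24 left in the root namespace, and the peer ends nvmf_init_br, nvmf_tgt_br and nvmf_tgt_br2 enslaved to the bridge nvmf_br, with iptables rules admitting TCP port 4420 and intra-bridge forwarding. A few hand-run sanity checks against that topology, using standard iproute2 commands and the names the harness uses (these are not part of the test script itself):

ip -br addr show nvmf_init_if                 # expect 10.0.0.1/24 on the initiator side
ip netns exec nvmf_tgt_ns_spdk ip -br addr    # expect 10.0.0.2/24 and 10.0.0.3/24 on the target veths
ip link show master nvmf_br                   # the three *_br peers should be enslaved to the bridge
ping -c 1 10.0.0.2                            # initiator to first target address, as the script does below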
00:21:28.378 07:30:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:28.378 07:30:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:28.378 Cannot find device "nvmf_tgt_br" 00:21:28.378 07:30:49 -- nvmf/common.sh@157 -- # true 00:21:28.378 07:30:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:28.378 Cannot find device "nvmf_tgt_br2" 00:21:28.378 07:30:49 -- nvmf/common.sh@158 -- # true 00:21:28.378 07:30:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:28.378 07:30:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:28.378 07:30:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:28.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.378 07:30:50 -- nvmf/common.sh@161 -- # true 00:21:28.378 07:30:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:28.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.378 07:30:50 -- nvmf/common.sh@162 -- # true 00:21:28.378 07:30:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:28.378 07:30:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:28.378 07:30:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:28.378 07:30:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:28.378 07:30:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:28.378 07:30:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:28.378 07:30:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:28.378 07:30:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:28.378 07:30:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:28.378 07:30:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:28.378 07:30:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:28.378 07:30:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:28.378 07:30:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:28.378 07:30:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:28.378 07:30:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:28.378 07:30:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:28.378 07:30:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:28.378 07:30:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:28.378 07:30:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:28.378 07:30:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:28.378 07:30:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:28.378 07:30:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:28.378 07:30:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:28.378 07:30:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:28.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:28.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:21:28.378 00:21:28.378 --- 10.0.0.2 ping statistics --- 00:21:28.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.378 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:28.378 07:30:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:28.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:28.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:21:28.378 00:21:28.378 --- 10.0.0.3 ping statistics --- 00:21:28.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.378 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:28.378 07:30:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:28.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:28.378 00:21:28.378 --- 10.0.0.1 ping statistics --- 00:21:28.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.378 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:28.378 07:30:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.378 07:30:50 -- nvmf/common.sh@421 -- # return 0 00:21:28.378 07:30:50 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:21:28.378 07:30:50 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:28.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:28.959 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:21:28.959 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:21:28.959 07:30:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.959 07:30:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:28.959 07:30:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:28.959 07:30:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.959 07:30:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:28.959 07:30:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:28.959 07:30:51 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:21:28.959 07:30:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:28.959 07:30:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.959 07:30:51 -- common/autotest_common.sh@10 -- # set +x 00:21:28.959 07:30:51 -- nvmf/common.sh@469 -- # nvmfpid=88398 00:21:28.959 07:30:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:28.959 07:30:51 -- nvmf/common.sh@470 -- # waitforlisten 88398 00:21:28.959 07:30:51 -- common/autotest_common.sh@829 -- # '[' -z 88398 ']' 00:21:28.959 07:30:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.959 07:30:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.959 07:30:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.959 07:30:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.959 07:30:51 -- common/autotest_common.sh@10 -- # set +x 00:21:29.217 [2024-11-28 07:30:51.242424] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:29.217 [2024-11-28 07:30:51.242685] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.217 [2024-11-28 07:30:51.385685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.218 [2024-11-28 07:30:51.469181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:29.218 [2024-11-28 07:30:51.469565] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.218 [2024-11-28 07:30:51.469694] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.218 [2024-11-28 07:30:51.469789] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.218 [2024-11-28 07:30:51.470040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.218 [2024-11-28 07:30:51.471442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.218 [2024-11-28 07:30:51.471540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.218 [2024-11-28 07:30:51.471745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.155 07:30:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.155 07:30:52 -- common/autotest_common.sh@862 -- # return 0 00:21:30.155 07:30:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:30.155 07:30:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.155 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:21:30.155 07:30:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.155 07:30:52 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:30.155 07:30:52 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:21:30.155 07:30:52 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:21:30.155 07:30:52 -- scripts/common.sh@311 -- # local bdf bdfs 00:21:30.155 07:30:52 -- scripts/common.sh@312 -- # local nvmes 00:21:30.155 07:30:52 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:21:30.155 07:30:52 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:30.155 07:30:52 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:21:30.155 07:30:52 -- scripts/common.sh@297 -- # local bdf= 00:21:30.155 07:30:52 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:21:30.155 07:30:52 -- scripts/common.sh@232 -- # local class 00:21:30.155 07:30:52 -- scripts/common.sh@233 -- # local subclass 00:21:30.155 07:30:52 -- scripts/common.sh@234 -- # local progif 00:21:30.155 07:30:52 -- scripts/common.sh@235 -- # printf %02x 1 00:21:30.155 07:30:52 -- scripts/common.sh@235 -- # class=01 00:21:30.155 07:30:52 -- scripts/common.sh@236 -- # printf %02x 8 00:21:30.155 07:30:52 -- scripts/common.sh@236 -- # subclass=08 00:21:30.155 07:30:52 -- scripts/common.sh@237 -- # printf %02x 2 00:21:30.155 07:30:52 -- scripts/common.sh@237 -- # progif=02 00:21:30.156 07:30:52 -- scripts/common.sh@239 -- # hash lspci 00:21:30.156 07:30:52 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:21:30.156 07:30:52 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:21:30.156 07:30:52 -- scripts/common.sh@242 -- # grep -i -- -p02 00:21:30.156 07:30:52 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:30.156 07:30:52 -- scripts/common.sh@244 -- # tr -d '"' 00:21:30.156 07:30:52 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:30.156 07:30:52 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:21:30.156 07:30:52 -- scripts/common.sh@15 -- # local i 00:21:30.156 07:30:52 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:21:30.156 07:30:52 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:30.156 07:30:52 -- scripts/common.sh@24 -- # return 0 00:21:30.156 07:30:52 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:21:30.156 07:30:52 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:30.156 07:30:52 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:21:30.156 07:30:52 -- scripts/common.sh@15 -- # local i 00:21:30.156 07:30:52 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:21:30.156 07:30:52 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:30.156 07:30:52 -- scripts/common.sh@24 -- # return 0 00:21:30.156 07:30:52 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:21:30.156 07:30:52 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:21:30.156 07:30:52 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:21:30.156 07:30:52 -- scripts/common.sh@322 -- # uname -s 00:21:30.156 07:30:52 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:21:30.156 07:30:52 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:21:30.156 07:30:52 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:21:30.156 07:30:52 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:21:30.156 07:30:52 -- scripts/common.sh@322 -- # uname -s 00:21:30.156 07:30:52 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:21:30.156 07:30:52 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:21:30.156 07:30:52 -- scripts/common.sh@327 -- # (( 2 )) 00:21:30.156 07:30:52 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:30.156 07:30:52 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:21:30.156 07:30:52 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:21:30.156 07:30:52 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:21:30.156 07:30:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:30.156 07:30:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:30.156 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:21:30.156 ************************************ 00:21:30.156 START TEST spdk_target_abort 00:21:30.156 ************************************ 00:21:30.156 07:30:52 -- common/autotest_common.sh@1114 -- # spdk_target 00:21:30.156 07:30:52 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:30.156 07:30:52 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:21:30.156 07:30:52 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:21:30.156 07:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.156 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:21:30.156 spdk_targetn1 00:21:30.156 07:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.156 07:30:52 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.156 07:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.156 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:21:30.156 [2024-11-28 
07:30:52.428683] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.416 07:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:21:30.416 07:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.416 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:21:30.416 07:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:21:30.416 07:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.416 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:21:30.416 07:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:21:30.416 07:30:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.416 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:21:30.416 [2024-11-28 07:30:52.460829] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.416 07:30:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:30.416 07:30:52 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:33.705 Initializing NVMe Controllers 00:21:33.705 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:33.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:33.705 Initialization complete. Launching workers. 00:21:33.705 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10494, failed: 0 00:21:33.705 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1046, failed to submit 9448 00:21:33.705 success 831, unsuccess 215, failed 0 00:21:33.705 07:30:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:33.705 07:30:55 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:36.995 Initializing NVMe Controllers 00:21:36.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:36.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:36.995 Initialization complete. Launching workers. 00:21:36.995 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8967, failed: 0 00:21:36.995 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1166, failed to submit 7801 00:21:36.995 success 367, unsuccess 799, failed 0 00:21:36.995 07:30:59 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:36.995 07:30:59 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:40.286 Initializing NVMe Controllers 00:21:40.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:40.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:40.286 Initialization complete. Launching workers. 
00:21:40.286 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32649, failed: 0 00:21:40.286 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2310, failed to submit 30339 00:21:40.286 success 580, unsuccess 1730, failed 0 00:21:40.286 07:31:02 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:21:40.286 07:31:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.286 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:21:40.286 07:31:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.286 07:31:02 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:40.286 07:31:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.286 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:21:40.546 07:31:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.546 07:31:02 -- target/abort_qd_sizes.sh@62 -- # killprocess 88398 00:21:40.546 07:31:02 -- common/autotest_common.sh@936 -- # '[' -z 88398 ']' 00:21:40.546 07:31:02 -- common/autotest_common.sh@940 -- # kill -0 88398 00:21:40.546 07:31:02 -- common/autotest_common.sh@941 -- # uname 00:21:40.546 07:31:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.546 07:31:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88398 00:21:40.546 07:31:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:40.546 07:31:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:40.546 killing process with pid 88398 00:21:40.546 07:31:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88398' 00:21:40.546 07:31:02 -- common/autotest_common.sh@955 -- # kill 88398 00:21:40.546 07:31:02 -- common/autotest_common.sh@960 -- # wait 88398 00:21:40.806 ************************************ 00:21:40.806 END TEST spdk_target_abort 00:21:40.806 ************************************ 00:21:40.806 00:21:40.806 real 0m10.473s 00:21:40.806 user 0m42.876s 00:21:40.806 sys 0m1.786s 00:21:40.806 07:31:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:40.806 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:21:40.806 07:31:02 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:21:40.806 07:31:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:40.806 07:31:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:40.806 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:21:40.806 ************************************ 00:21:40.806 START TEST kernel_target_abort 00:21:40.806 ************************************ 00:21:40.806 07:31:02 -- common/autotest_common.sh@1114 -- # kernel_target 00:21:40.806 07:31:02 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:21:40.806 07:31:02 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:21:40.806 07:31:02 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:21:40.806 07:31:02 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:21:40.806 07:31:02 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:21:40.806 07:31:02 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:40.806 07:31:02 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:40.806 07:31:02 -- nvmf/common.sh@627 -- # local block nvme 00:21:40.806 07:31:02 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:21:40.806 07:31:02 -- nvmf/common.sh@630 -- # modprobe nvmet 00:21:40.806 07:31:02 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:40.806 07:31:02 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:41.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:41.065 Waiting for block devices as requested 00:21:41.324 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:41.325 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:41.325 07:31:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:41.325 07:31:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:41.325 07:31:03 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:21:41.325 07:31:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:21:41.325 07:31:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:41.325 No valid GPT data, bailing 00:21:41.325 07:31:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:41.325 07:31:03 -- scripts/common.sh@393 -- # pt= 00:21:41.325 07:31:03 -- scripts/common.sh@394 -- # return 1 00:21:41.325 07:31:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:21:41.325 07:31:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:41.325 07:31:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:41.325 07:31:03 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:21:41.325 07:31:03 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:21:41.325 07:31:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:41.584 No valid GPT data, bailing 00:21:41.584 07:31:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:41.584 07:31:03 -- scripts/common.sh@393 -- # pt= 00:21:41.584 07:31:03 -- scripts/common.sh@394 -- # return 1 00:21:41.584 07:31:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:21:41.584 07:31:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:41.584 07:31:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:21:41.584 07:31:03 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:21:41.584 07:31:03 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:21:41.584 07:31:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:21:41.584 No valid GPT data, bailing 00:21:41.584 07:31:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:21:41.584 07:31:03 -- scripts/common.sh@393 -- # pt= 00:21:41.584 07:31:03 -- scripts/common.sh@394 -- # return 1 00:21:41.584 07:31:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:21:41.584 07:31:03 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:41.584 07:31:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:21:41.584 07:31:03 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:21:41.584 07:31:03 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:21:41.584 07:31:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:21:41.584 No valid GPT data, bailing 00:21:41.584 07:31:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:21:41.584 07:31:03 -- scripts/common.sh@393 -- # pt= 00:21:41.584 07:31:03 -- scripts/common.sh@394 -- # return 1 00:21:41.584 07:31:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:21:41.584 07:31:03 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:21:41.584 07:31:03 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:41.584 07:31:03 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:41.584 07:31:03 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:41.584 07:31:03 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:21:41.584 07:31:03 -- nvmf/common.sh@654 -- # echo 1 00:21:41.584 07:31:03 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:21:41.584 07:31:03 -- nvmf/common.sh@656 -- # echo 1 00:21:41.584 07:31:03 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:21:41.584 07:31:03 -- nvmf/common.sh@663 -- # echo tcp 00:21:41.584 07:31:03 -- nvmf/common.sh@664 -- # echo 4420 00:21:41.584 07:31:03 -- nvmf/common.sh@665 -- # echo ipv4 00:21:41.584 07:31:03 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:41.584 07:31:03 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:42ff1b5c-407a-478a-8c45-326c3d19865f --hostid=42ff1b5c-407a-478a-8c45-326c3d19865f -a 10.0.0.1 -t tcp -s 4420 00:21:41.843 00:21:41.843 Discovery Log Number of Records 2, Generation counter 2 00:21:41.843 =====Discovery Log Entry 0====== 00:21:41.843 trtype: tcp 00:21:41.843 adrfam: ipv4 00:21:41.843 subtype: current discovery subsystem 00:21:41.843 treq: not specified, sq flow control disable supported 00:21:41.843 portid: 1 00:21:41.843 trsvcid: 4420 00:21:41.843 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:41.843 traddr: 10.0.0.1 00:21:41.843 eflags: none 00:21:41.843 sectype: none 00:21:41.843 =====Discovery Log Entry 1====== 00:21:41.843 trtype: tcp 00:21:41.843 adrfam: ipv4 00:21:41.843 subtype: nvme subsystem 00:21:41.843 treq: not specified, sq flow control disable supported 00:21:41.843 portid: 1 00:21:41.843 trsvcid: 4420 00:21:41.843 subnqn: kernel_target 00:21:41.843 traddr: 10.0.0.1 00:21:41.843 eflags: none 00:21:41.843 sectype: none 00:21:41.843 07:31:03 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:21:41.843 07:31:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:41.843 07:31:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:41.843 07:31:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:41.844 07:31:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:45.161 Initializing NVMe Controllers 00:21:45.161 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:45.161 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:45.161 Initialization complete. Launching workers. 00:21:45.161 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30580, failed: 0 00:21:45.161 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30580, failed to submit 0 00:21:45.161 success 0, unsuccess 30580, failed 0 00:21:45.161 07:31:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:45.161 07:31:07 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:48.449 Initializing NVMe Controllers 00:21:48.449 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:48.449 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:48.449 Initialization complete. Launching workers. 00:21:48.449 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65815, failed: 0 00:21:48.449 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27117, failed to submit 38698 00:21:48.449 success 0, unsuccess 27117, failed 0 00:21:48.449 07:31:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:48.449 07:31:10 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:51.738 Initializing NVMe Controllers 00:21:51.738 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:51.738 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:51.738 Initialization complete. Launching workers. 
00:21:51.738 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 69883, failed: 0 00:21:51.738 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 17476, failed to submit 52407 00:21:51.738 success 0, unsuccess 17476, failed 0 00:21:51.738 07:31:13 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:21:51.738 07:31:13 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:21:51.738 07:31:13 -- nvmf/common.sh@677 -- # echo 0 00:21:51.738 07:31:13 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:21:51.738 07:31:13 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:51.738 07:31:13 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:51.738 07:31:13 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:51.738 07:31:13 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:21:51.738 07:31:13 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:21:51.738 00:21:51.738 real 0m10.567s 00:21:51.738 user 0m5.148s 00:21:51.738 sys 0m2.632s 00:21:51.738 07:31:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:51.738 ************************************ 00:21:51.738 END TEST kernel_target_abort 00:21:51.738 ************************************ 00:21:51.738 07:31:13 -- common/autotest_common.sh@10 -- # set +x 00:21:51.738 07:31:13 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:21:51.738 07:31:13 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:21:51.738 07:31:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:51.738 07:31:13 -- nvmf/common.sh@116 -- # sync 00:21:51.738 07:31:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:51.738 07:31:13 -- nvmf/common.sh@119 -- # set +e 00:21:51.738 07:31:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:51.738 07:31:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:51.738 rmmod nvme_tcp 00:21:51.738 rmmod nvme_fabrics 00:21:51.738 rmmod nvme_keyring 00:21:51.738 07:31:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:51.738 07:31:13 -- nvmf/common.sh@123 -- # set -e 00:21:51.738 07:31:13 -- nvmf/common.sh@124 -- # return 0 00:21:51.738 07:31:13 -- nvmf/common.sh@477 -- # '[' -n 88398 ']' 00:21:51.738 07:31:13 -- nvmf/common.sh@478 -- # killprocess 88398 00:21:51.738 07:31:13 -- common/autotest_common.sh@936 -- # '[' -z 88398 ']' 00:21:51.738 07:31:13 -- common/autotest_common.sh@940 -- # kill -0 88398 00:21:51.738 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (88398) - No such process 00:21:51.738 Process with pid 88398 is not found 00:21:51.738 07:31:13 -- common/autotest_common.sh@963 -- # echo 'Process with pid 88398 is not found' 00:21:51.738 07:31:13 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:21:51.738 07:31:13 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:51.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:52.255 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:52.255 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:52.255 07:31:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:52.255 07:31:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:52.255 07:31:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.255 07:31:14 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:21:52.255 07:31:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.255 07:31:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:52.255 07:31:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.255 07:31:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:52.255 00:21:52.255 real 0m24.721s 00:21:52.255 user 0m49.519s 00:21:52.255 sys 0m5.804s 00:21:52.255 07:31:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:52.255 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:21:52.255 ************************************ 00:21:52.255 END TEST nvmf_abort_qd_sizes 00:21:52.255 ************************************ 00:21:52.255 07:31:14 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:52.255 07:31:14 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:21:52.255 07:31:14 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:21:52.255 07:31:14 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:21:52.255 07:31:14 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:21:52.255 07:31:14 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:21:52.255 07:31:14 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:21:52.256 07:31:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.256 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:21:52.256 07:31:14 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:21:52.256 07:31:14 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:21:52.256 07:31:14 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:21:52.256 07:31:14 -- common/autotest_common.sh@10 -- # set +x 00:21:54.160 INFO: APP EXITING 00:21:54.160 INFO: killing all VMs 00:21:54.160 INFO: killing vhost app 00:21:54.160 INFO: EXIT DONE 00:21:54.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:54.727 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:54.727 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:55.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:55.554 Cleaning 00:21:55.554 Removing: /var/run/dpdk/spdk0/config 00:21:55.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:55.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:55.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:55.554 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:55.554 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:55.554 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:55.554 Removing: /var/run/dpdk/spdk1/config 00:21:55.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:55.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:55.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:21:55.554 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:55.554 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:55.554 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:55.554 Removing: /var/run/dpdk/spdk2/config 00:21:55.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:55.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:55.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:55.554 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:55.554 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:55.554 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:55.554 Removing: /var/run/dpdk/spdk3/config 00:21:55.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:55.554 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:55.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:55.555 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:55.555 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:55.555 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:55.555 Removing: /var/run/dpdk/spdk4/config 00:21:55.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:55.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:55.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:55.555 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:55.555 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:55.555 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:55.555 Removing: /dev/shm/nvmf_trace.0 00:21:55.555 Removing: /dev/shm/spdk_tgt_trace.pid65878 00:21:55.555 Removing: /var/run/dpdk/spdk0 00:21:55.555 Removing: /var/run/dpdk/spdk1 00:21:55.555 Removing: /var/run/dpdk/spdk2 00:21:55.555 Removing: /var/run/dpdk/spdk3 00:21:55.555 Removing: /var/run/dpdk/spdk4 00:21:55.555 Removing: /var/run/dpdk/spdk_pid65715 00:21:55.555 Removing: /var/run/dpdk/spdk_pid65878 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66131 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66327 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66480 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66557 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66640 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66738 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66811 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66855 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66885 00:21:55.555 Removing: /var/run/dpdk/spdk_pid66954 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67053 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67498 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67550 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67601 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67617 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67692 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67708 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67776 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67792 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67834 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67856 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67901 00:21:55.555 Removing: /var/run/dpdk/spdk_pid67919 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68056 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68092 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68173 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68229 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68249 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68313 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68333 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68367 00:21:55.555 Removing: /var/run/dpdk/spdk_pid68387 
00:21:55.814 Removing: /var/run/dpdk/spdk_pid68421 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68441 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68475 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68495 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68529 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68549 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68583 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68603 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68642 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68657 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68692 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68711 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68746 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68765 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68800 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68819 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68854 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68873 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68908 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68927 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68962 00:21:55.814 Removing: /var/run/dpdk/spdk_pid68980 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69016 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69034 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69072 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69086 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69126 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69140 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69180 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69203 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69240 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69267 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69300 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69325 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69354 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69374 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69409 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69486 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69592 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69924 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69940 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69978 00:21:55.814 Removing: /var/run/dpdk/spdk_pid69990 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70004 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70026 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70040 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70059 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70077 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70095 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70109 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70132 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70145 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70158 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70176 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70194 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70208 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70226 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70238 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70257 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70287 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70299 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70331 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70397 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70423 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70433 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70467 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70475 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70484 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70525 00:21:55.814 Removing: 
/var/run/dpdk/spdk_pid70537 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70563 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70571 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70584 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70586 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70599 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70601 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70614 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70616 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70648 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70674 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70684 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70718 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70722 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70736 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70771 00:21:55.814 Removing: /var/run/dpdk/spdk_pid70788 00:21:56.073 Removing: /var/run/dpdk/spdk_pid70809 00:21:56.073 Removing: /var/run/dpdk/spdk_pid70822 00:21:56.074 Removing: /var/run/dpdk/spdk_pid70824 00:21:56.074 Removing: /var/run/dpdk/spdk_pid70837 00:21:56.074 Removing: /var/run/dpdk/spdk_pid70845 00:21:56.074 Removing: /var/run/dpdk/spdk_pid70852 00:21:56.074 Removing: /var/run/dpdk/spdk_pid70860 00:21:56.074 Removing: /var/run/dpdk/spdk_pid70867 00:21:56.074 Removing: /var/run/dpdk/spdk_pid70947 00:21:56.074 Removing: /var/run/dpdk/spdk_pid70991 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71103 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71139 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71184 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71198 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71213 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71233 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71268 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71277 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71353 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71367 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71426 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71494 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71550 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71578 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71671 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71717 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71753 00:21:56.074 Removing: /var/run/dpdk/spdk_pid71972 00:21:56.074 Removing: /var/run/dpdk/spdk_pid72064 00:21:56.074 Removing: /var/run/dpdk/spdk_pid72097 00:21:56.074 Removing: /var/run/dpdk/spdk_pid72422 00:21:56.074 Removing: /var/run/dpdk/spdk_pid72466 00:21:56.074 Removing: /var/run/dpdk/spdk_pid72781 00:21:56.074 Removing: /var/run/dpdk/spdk_pid73200 00:21:56.074 Removing: /var/run/dpdk/spdk_pid73469 00:21:56.074 Removing: /var/run/dpdk/spdk_pid74273 00:21:56.074 Removing: /var/run/dpdk/spdk_pid75114 00:21:56.074 Removing: /var/run/dpdk/spdk_pid75231 00:21:56.074 Removing: /var/run/dpdk/spdk_pid75299 00:21:56.074 Removing: /var/run/dpdk/spdk_pid76582 00:21:56.074 Removing: /var/run/dpdk/spdk_pid76808 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77131 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77242 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77376 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77403 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77431 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77458 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77561 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77690 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77845 00:21:56.074 Removing: /var/run/dpdk/spdk_pid77926 00:21:56.074 Removing: /var/run/dpdk/spdk_pid78320 00:21:56.074 Removing: /var/run/dpdk/spdk_pid78675 
00:21:56.074 Removing: /var/run/dpdk/spdk_pid78677 00:21:56.074 Removing: /var/run/dpdk/spdk_pid80911 00:21:56.074 Removing: /var/run/dpdk/spdk_pid80914 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81201 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81215 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81235 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81266 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81271 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81360 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81362 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81470 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81472 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81586 00:21:56.074 Removing: /var/run/dpdk/spdk_pid81592 00:21:56.074 Removing: /var/run/dpdk/spdk_pid82002 00:21:56.074 Removing: /var/run/dpdk/spdk_pid82046 00:21:56.074 Removing: /var/run/dpdk/spdk_pid82155 00:21:56.074 Removing: /var/run/dpdk/spdk_pid82236 00:21:56.074 Removing: /var/run/dpdk/spdk_pid82548 00:21:56.074 Removing: /var/run/dpdk/spdk_pid82753 00:21:56.074 Removing: /var/run/dpdk/spdk_pid83140 00:21:56.074 Removing: /var/run/dpdk/spdk_pid83679 00:21:56.074 Removing: /var/run/dpdk/spdk_pid84137 00:21:56.074 Removing: /var/run/dpdk/spdk_pid84197 00:21:56.074 Removing: /var/run/dpdk/spdk_pid84258 00:21:56.074 Removing: /var/run/dpdk/spdk_pid84325 00:21:56.074 Removing: /var/run/dpdk/spdk_pid84446 00:21:56.074 Removing: /var/run/dpdk/spdk_pid84505 00:21:56.333 Removing: /var/run/dpdk/spdk_pid84561 00:21:56.333 Removing: /var/run/dpdk/spdk_pid84621 00:21:56.333 Removing: /var/run/dpdk/spdk_pid84958 00:21:56.333 Removing: /var/run/dpdk/spdk_pid86140 00:21:56.333 Removing: /var/run/dpdk/spdk_pid86286 00:21:56.333 Removing: /var/run/dpdk/spdk_pid86530 00:21:56.333 Removing: /var/run/dpdk/spdk_pid87094 00:21:56.333 Removing: /var/run/dpdk/spdk_pid87254 00:21:56.333 Removing: /var/run/dpdk/spdk_pid87415 00:21:56.333 Removing: /var/run/dpdk/spdk_pid87512 00:21:56.333 Removing: /var/run/dpdk/spdk_pid87680 00:21:56.333 Removing: /var/run/dpdk/spdk_pid87790 00:21:56.333 Removing: /var/run/dpdk/spdk_pid88449 00:21:56.333 Removing: /var/run/dpdk/spdk_pid88484 00:21:56.333 Removing: /var/run/dpdk/spdk_pid88519 00:21:56.333 Removing: /var/run/dpdk/spdk_pid88770 00:21:56.333 Removing: /var/run/dpdk/spdk_pid88806 00:21:56.333 Removing: /var/run/dpdk/spdk_pid88838 00:21:56.333 Clean 00:21:56.333 killing process with pid 60049 00:21:56.333 killing process with pid 60051 00:21:56.333 07:31:18 -- common/autotest_common.sh@1446 -- # return 0 00:21:56.333 07:31:18 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:21:56.333 07:31:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.333 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:21:56.333 07:31:18 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:21:56.333 07:31:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.333 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:21:56.592 07:31:18 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:56.592 07:31:18 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:56.592 07:31:18 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:56.592 07:31:18 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:21:56.592 07:31:18 -- spdk/autotest.sh@383 -- # hostname 00:21:56.592 07:31:18 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:56.851 geninfo: WARNING: invalid characters removed from testname! 00:22:18.780 07:31:38 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:19.039 07:31:41 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:21.571 07:31:43 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:23.473 07:31:45 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:26.083 07:31:47 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:27.989 07:31:49 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:30.524 07:31:52 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:30.524 07:31:52 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:22:30.524 07:31:52 -- common/autotest_common.sh@1690 -- $ lcov --version 00:22:30.524 07:31:52 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:22:30.524 07:31:52 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:22:30.524 07:31:52 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:22:30.524 07:31:52 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:22:30.524 07:31:52 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:22:30.524 07:31:52 -- scripts/common.sh@335 -- $ IFS=.-: 00:22:30.524 07:31:52 -- scripts/common.sh@335 -- $ read -ra ver1 00:22:30.524 07:31:52 -- scripts/common.sh@336 -- $ IFS=.-: 
00:22:30.524 07:31:52 -- scripts/common.sh@336 -- $ read -ra ver2 00:22:30.524 07:31:52 -- scripts/common.sh@337 -- $ local 'op=<' 00:22:30.524 07:31:52 -- scripts/common.sh@339 -- $ ver1_l=2 00:22:30.524 07:31:52 -- scripts/common.sh@340 -- $ ver2_l=1 00:22:30.524 07:31:52 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:22:30.524 07:31:52 -- scripts/common.sh@343 -- $ case "$op" in 00:22:30.524 07:31:52 -- scripts/common.sh@344 -- $ : 1 00:22:30.524 07:31:52 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:22:30.524 07:31:52 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:30.524 07:31:52 -- scripts/common.sh@364 -- $ decimal 1 00:22:30.524 07:31:52 -- scripts/common.sh@352 -- $ local d=1 00:22:30.524 07:31:52 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:22:30.524 07:31:52 -- scripts/common.sh@354 -- $ echo 1 00:22:30.524 07:31:52 -- scripts/common.sh@364 -- $ ver1[v]=1 00:22:30.524 07:31:52 -- scripts/common.sh@365 -- $ decimal 2 00:22:30.524 07:31:52 -- scripts/common.sh@352 -- $ local d=2 00:22:30.524 07:31:52 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:22:30.524 07:31:52 -- scripts/common.sh@354 -- $ echo 2 00:22:30.524 07:31:52 -- scripts/common.sh@365 -- $ ver2[v]=2 00:22:30.524 07:31:52 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:22:30.524 07:31:52 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:22:30.524 07:31:52 -- scripts/common.sh@367 -- $ return 0 00:22:30.524 07:31:52 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.524 07:31:52 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:22:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.524 --rc genhtml_branch_coverage=1 00:22:30.524 --rc genhtml_function_coverage=1 00:22:30.524 --rc genhtml_legend=1 00:22:30.524 --rc geninfo_all_blocks=1 00:22:30.524 --rc geninfo_unexecuted_blocks=1 00:22:30.524 00:22:30.524 ' 00:22:30.524 07:31:52 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:22:30.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.524 --rc genhtml_branch_coverage=1 00:22:30.524 --rc genhtml_function_coverage=1 00:22:30.524 --rc genhtml_legend=1 00:22:30.524 --rc geninfo_all_blocks=1 00:22:30.524 --rc geninfo_unexecuted_blocks=1 00:22:30.525 00:22:30.525 ' 00:22:30.525 07:31:52 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:22:30.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.525 --rc genhtml_branch_coverage=1 00:22:30.525 --rc genhtml_function_coverage=1 00:22:30.525 --rc genhtml_legend=1 00:22:30.525 --rc geninfo_all_blocks=1 00:22:30.525 --rc geninfo_unexecuted_blocks=1 00:22:30.525 00:22:30.525 ' 00:22:30.525 07:31:52 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:22:30.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.525 --rc genhtml_branch_coverage=1 00:22:30.525 --rc genhtml_function_coverage=1 00:22:30.525 --rc genhtml_legend=1 00:22:30.525 --rc geninfo_all_blocks=1 00:22:30.525 --rc geninfo_unexecuted_blocks=1 00:22:30.525 00:22:30.525 ' 00:22:30.525 07:31:52 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:30.525 07:31:52 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:30.525 07:31:52 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.525 07:31:52 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.525 07:31:52 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.525 07:31:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.525 07:31:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.525 07:31:52 -- paths/export.sh@5 -- $ export PATH 00:22:30.525 07:31:52 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.525 07:31:52 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:30.525 07:31:52 -- common/autobuild_common.sh@440 -- $ date +%s 00:22:30.525 07:31:52 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732779112.XXXXXX 00:22:30.525 07:31:52 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732779112.OAdj5K 00:22:30.525 07:31:52 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:22:30.525 07:31:52 -- common/autobuild_common.sh@446 -- $ '[' -n v23.11 ']' 00:22:30.525 07:31:52 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:22:30.525 07:31:52 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:22:30.525 07:31:52 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:30.525 07:31:52 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:30.525 07:31:52 -- common/autobuild_common.sh@456 -- $ get_config_params 00:22:30.525 07:31:52 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:22:30.525 07:31:52 -- common/autotest_common.sh@10 -- $ set +x 00:22:30.525 07:31:52 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:22:30.525 07:31:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:30.525 07:31:52 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
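The coverage post-processing earlier in this trace has three phases: capture counters from the test run, merge them with the pre-test baseline, and strip code that is not SPDK's own. Condensed into plain lcov calls (the long --rc branch/function options are dropped for brevity, and the literal testname from the trace is replaced with $(hostname), which is where the script takes it from):

    out=/home/vagrant/spdk_repo/spdk/../output
    repo=/home/vagrant/spdk_repo/spdk
    # 1. Capture counters left behind by the test run.
    lcov -q -c --no-external -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"
    # 2. Merge with the baseline captured before the tests started.
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # 3. Filter out third-party, system and example code, as in the trace above.
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"

The version check traced just above (comparing the installed lcov against 2) appears to decide which spelling of those branch/function options to pass, most likely because the --rc option names changed between lcov 1.x and 2.x.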
00:22:30.525 07:31:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:30.525 07:31:52 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:30.525 07:31:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:30.525 07:31:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:30.525 07:31:52 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:30.525 07:31:52 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:30.525 07:31:52 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:30.525 07:31:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:30.525 + [[ -n 5919 ]] 00:22:30.525 + sudo kill 5919 00:22:30.535 [Pipeline] } 00:22:30.551 [Pipeline] // timeout 00:22:30.557 [Pipeline] } 00:22:30.573 [Pipeline] // stage 00:22:30.579 [Pipeline] } 00:22:30.593 [Pipeline] // catchError 00:22:30.604 [Pipeline] stage 00:22:30.607 [Pipeline] { (Stop VM) 00:22:30.620 [Pipeline] sh 00:22:30.901 + vagrant halt 00:22:33.437 ==> default: Halting domain... 00:22:40.018 [Pipeline] sh 00:22:40.297 + vagrant destroy -f 00:22:42.832 ==> default: Removing domain... 00:22:43.411 [Pipeline] sh 00:22:43.692 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:22:43.701 [Pipeline] } 00:22:43.716 [Pipeline] // stage 00:22:43.722 [Pipeline] } 00:22:43.737 [Pipeline] // dir 00:22:43.742 [Pipeline] } 00:22:43.758 [Pipeline] // wrap 00:22:43.765 [Pipeline] } 00:22:43.778 [Pipeline] // catchError 00:22:43.788 [Pipeline] stage 00:22:43.791 [Pipeline] { (Epilogue) 00:22:43.803 [Pipeline] sh 00:22:44.085 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:48.292 [Pipeline] catchError 00:22:48.294 [Pipeline] { 00:22:48.307 [Pipeline] sh 00:22:48.587 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:48.587 Artifacts sizes are good 00:22:48.596 [Pipeline] } 00:22:48.610 [Pipeline] // catchError 00:22:48.622 [Pipeline] archiveArtifacts 00:22:48.629 Archiving artifacts 00:22:48.760 [Pipeline] cleanWs 00:22:48.774 [WS-CLEANUP] Deleting project workspace... 00:22:48.774 [WS-CLEANUP] Deferred wipeout is used... 00:22:48.796 [WS-CLEANUP] done 00:22:48.798 [Pipeline] } 00:22:48.818 [Pipeline] // stage 00:22:48.823 [Pipeline] } 00:22:48.838 [Pipeline] // node 00:22:48.846 [Pipeline] End of Pipeline 00:22:48.887 Finished: SUCCESS
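For reference, the post-test teardown that closes out the job above reduces to the following sequence; every command is copied from the trace, and no arguments to the helper scripts are visible there:

    # Stop and remove the test VM, then hand the collected output back to the workspace.
    vagrant halt
    vagrant destroy -f
    mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output
    # Compress and size-check artifacts (scripts from the jbp checkout) before archiving.
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh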